Jan 31 04:43:09 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 04:43:09 crc restorecon[4655]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:09 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 04:43:10 crc restorecon[4655]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 
crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 
04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc 
restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 04:43:10 crc restorecon[4655]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 04:43:11 crc kubenswrapper[4832]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.571236 4832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577294 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577328 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577340 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577350 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577358 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577368 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577380 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577390 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577402 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577413 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577423 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577433 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577452 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577462 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577471 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577479 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577487 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577495 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577503 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577511 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577519 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577526 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577534 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:43:11 crc 
kubenswrapper[4832]: W0131 04:43:11.577542 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577550 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577580 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577588 4832 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577595 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577603 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577612 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577620 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577628 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577636 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577643 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577651 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577659 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577672 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577683 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577693 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577703 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577712 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577724 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577734 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577744 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577754 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577763 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577771 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577778 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577786 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577794 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577802 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577811 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577819 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577827 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577835 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577843 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577851 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577861 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577869 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577876 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577884 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577891 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577899 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577906 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577913 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577922 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577930 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577940 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577947 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577955 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.577963 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578102 4832 flags.go:64] FLAG: --address="0.0.0.0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578119 4832 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578133 4832 flags.go:64] FLAG: --anonymous-auth="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578144 4832 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578155 4832 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578164 4832 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578175 4832 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578186 4832 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578196 4832 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578205 4832 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578215 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578224 4832 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578233 4832 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578242 4832 flags.go:64] FLAG: --cgroup-root=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578250 4832 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578259 4832 flags.go:64] FLAG: --client-ca-file=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578269 4832 flags.go:64] FLAG: --cloud-config=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578277 4832 flags.go:64] FLAG: --cloud-provider=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578287 4832 flags.go:64] FLAG: --cluster-dns="[]"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578297 4832 flags.go:64] FLAG: --cluster-domain=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578306 4832 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578315 4832 flags.go:64] FLAG: --config-dir=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578323 4832 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578334 4832 flags.go:64] FLAG: --container-log-max-files="5"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578345 4832 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578354 4832 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578363 4832 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578372 4832 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578381 4832 flags.go:64] FLAG: --contention-profiling="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578389 4832 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578398 4832 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578408 4832 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578417 4832 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578428 4832 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578437 4832 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578446 4832 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578455 4832 flags.go:64] FLAG: --enable-load-reader="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578464 4832 flags.go:64] FLAG: --enable-server="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578473 4832 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578484 4832 flags.go:64] FLAG: --event-burst="100"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578493 4832 flags.go:64] FLAG: --event-qps="50"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578502 4832 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578512 4832 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578520 4832 flags.go:64] FLAG: --eviction-hard=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578531 4832 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578540 4832 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578549 4832 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578583 4832 flags.go:64] FLAG: --eviction-soft=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578592 4832 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578601 4832 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578610 4832 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578619 4832 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578628 4832 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578639 4832 flags.go:64] FLAG: --fail-swap-on="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578648 4832 flags.go:64] FLAG: --feature-gates=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578659 4832 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578667 4832 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578677 4832 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578686 4832 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578695 4832 flags.go:64] FLAG: --healthz-port="10248"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578704 4832 flags.go:64] FLAG: --help="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578713 4832 flags.go:64] FLAG: --hostname-override=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578722 4832 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578731 4832 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578740 4832 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578748 4832 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578757 4832 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578767 4832 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578777 4832 flags.go:64] FLAG: --image-service-endpoint=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578786 4832 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578795 4832 flags.go:64] FLAG: --kube-api-burst="100"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578804 4832 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578815 4832 flags.go:64] FLAG: --kube-api-qps="50"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578824 4832 flags.go:64] FLAG: --kube-reserved=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578833 4832 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578841 4832 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578850 4832 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578859 4832 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578868 4832 flags.go:64] FLAG: --lock-file=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578877 4832 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578886 4832 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578895 4832 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578908 4832 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578917 4832 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578925 4832 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578935 4832 flags.go:64] FLAG: --logging-format="text"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578943 4832 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578953 4832 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578962 4832 flags.go:64] FLAG: --manifest-url=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578970 4832 flags.go:64] FLAG: --manifest-url-header=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578982 4832 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.578991 4832 flags.go:64] FLAG: --max-open-files="1000000"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579001 4832 flags.go:64] FLAG: --max-pods="110"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579010 4832 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579019 4832 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579028 4832 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579037 4832 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579047 4832 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579056 4832 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579065 4832 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579085 4832 flags.go:64] FLAG: --node-status-max-images="50"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579094 4832 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579102 4832 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579111 4832 flags.go:64] FLAG: --pod-cidr=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579120 4832 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579134 4832 flags.go:64] FLAG: --pod-manifest-path=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579143 4832 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579152 4832 flags.go:64] FLAG: --pods-per-core="0"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579162 4832 flags.go:64] FLAG: --port="10250"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579171 4832 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579180 4832 flags.go:64] FLAG: --provider-id=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579188 4832 flags.go:64] FLAG: --qos-reserved=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579197 4832 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579206 4832 flags.go:64] FLAG: --register-node="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579217 4832 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579225 4832 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579249 4832 flags.go:64] FLAG: --registry-burst="10"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579258 4832 flags.go:64] FLAG: --registry-qps="5"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579267 4832 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579276 4832 flags.go:64] FLAG: --reserved-memory=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579286 4832 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579295 4832 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579304 4832 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579313 4832 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579322 4832 flags.go:64] FLAG: --runonce="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579331 4832 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579340 4832 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579348 4832 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579357 4832 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579366 4832 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579375 4832 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579384 4832 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579393 4832 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579401 4832 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579410 4832 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579419 4832 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579427 4832 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579436 4832 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579445 4832 flags.go:64] FLAG: --system-cgroups=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579454 4832 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579467 4832 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579476 4832 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579484 4832 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579494 4832 flags.go:64] FLAG: --tls-min-version=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579504 4832 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579513 4832 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579523 4832 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579532 4832 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579540 4832 flags.go:64] FLAG: --v="2"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579552 4832 flags.go:64] FLAG: --version="false"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579586 4832 flags.go:64] FLAG: --vmodule=""
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579596 4832 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.579605 4832 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579799 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579812 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579821 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579831 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579838 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579846 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579855 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579863 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579872 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579879 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579887 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579895 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579903 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579911 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579926 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579934 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579941 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579949 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579957 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579965 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579973 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579980 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.579988 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580003 4832 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580011 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580019 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580026 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580035 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580043 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580053 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580062 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580071 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580079 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580086 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580095 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580103 4832 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580111 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580121 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580131 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580140 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580149 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580157 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580165 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580172 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580180 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580188 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580198 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580206 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580214 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580222 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580230 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580237 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580245 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580252 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580260 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580271 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580278 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580288 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580298 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580306 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580315 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580323 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580331 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580340 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580349 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580357 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580365 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580373 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580381 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580389 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.580400 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.580423 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.596129 4832 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.596187 4832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596315 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596339 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596351 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596363 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596373 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596382 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596391 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596399 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596408 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596416 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596423 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596431 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596439 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596447 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596455 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596462 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596470 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596481 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true.
It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596493 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596502 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596511 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596519 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596529 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596539 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596548 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596590 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596603 4832 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596615 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596625 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596633 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596642 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596689 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: 
W0131 04:43:11.596699 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596708 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596716 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596723 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596732 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596740 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596748 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596756 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596767 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596778 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596787 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596796 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596806 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596814 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596823 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596831 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596840 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596849 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596857 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596866 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596873 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596882 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596890 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596900 4832 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596910 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596921 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596929 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596940 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596950 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596959 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596969 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596978 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596986 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.596994 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597002 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597009 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597017 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597026 4832 feature_gate.go:330] 
unrecognized feature gate: NewOLM Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597033 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.597048 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597290 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597306 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597315 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597324 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597332 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597341 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597349 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597358 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597367 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 04:43:11 crc 
kubenswrapper[4832]: W0131 04:43:11.597376 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597386 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597396 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597405 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597414 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597423 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597432 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597444 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597455 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597466 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597475 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597483 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597491 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597502 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597512 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597521 4832 feature_gate.go:330] unrecognized feature gate: Example Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597531 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597540 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597550 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597591 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597604 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597615 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597626 4832 feature_gate.go:330] unrecognized feature gate: 
UpgradeStatus Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597634 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597643 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597653 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597663 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597671 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597680 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597689 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597698 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597706 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597716 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597724 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597733 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597743 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597752 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597760 4832 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597769 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597779 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597787 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597796 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597804 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597812 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597820 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597828 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597836 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597844 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597851 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597859 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597866 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597874 4832 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597882 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597890 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597898 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597905 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597913 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597921 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597928 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597936 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597943 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.597951 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.597963 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 04:43:11 crc 
kubenswrapper[4832]: I0131 04:43:11.598218 4832 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.604473 4832 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.604699 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.606882 4832 server.go:997] "Starting client certificate rotation" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.606923 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.608909 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-26 05:38:00.009977775 +0000 UTC Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.609021 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.635355 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.639311 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.639780 4832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 
04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.657800 4832 log.go:25] "Validated CRI v1 runtime API" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.708260 4832 log.go:25] "Validated CRI v1 image API" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.712553 4832 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.718718 4832 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-04-38-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.718772 4832 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.748502 4832 manager.go:217] Machine: {Timestamp:2026-01-31 04:43:11.744282624 +0000 UTC m=+0.693104399 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:31767ebb-3087-408c-bd64-29e9bda9f554 BootID:c783a103-3bac-43f3-9bbb-fd265be6128f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:96:d1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:96:d1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:41:27:8d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:29:2a:7b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9e:88:d8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3a:69:fb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:4d:2b:8d:e5:e9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:2a:61:55:05:44 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} 
{Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.748952 4832 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.749172 4832 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.753366 4832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.753794 4832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.753847 4832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.754181 4832 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.754203 4832 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.754922 4832 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.754979 4832 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.755393 4832 state_mem.go:36] "Initialized new in-memory state store" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.755534 4832 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.761176 4832 kubelet.go:418] "Attempting to sync node with API server" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.761227 4832 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.761279 4832 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.761302 4832 kubelet.go:324] "Adding apiserver pod source" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.761333 4832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.764735 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.764845 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.765726 4832 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.765851 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.769611 4832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.770529 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.772171 4832 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774518 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774600 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774620 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774636 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774662 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774678 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774693 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774723 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774743 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774759 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774795 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774811 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.774864 4832 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.775656 4832 server.go:1280] "Started kubelet" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.776074 4832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.776187 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.776545 4832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.777208 4832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 04:43:11 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.778739 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.778795 4832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.778974 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:53:15.996494898 +0000 UTC Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.779466 4832 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.779519 4832 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.780123 4832 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.780140 4832 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.782001 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.782161 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.782381 4832 factory.go:55] Registering systemd factory Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.782411 4832 factory.go:221] Registration of the systemd container factory successfully Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.782881 4832 factory.go:153] Registering CRI-O factory Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.784087 4832 factory.go:221] Registration of the crio container factory successfully Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.784889 4832 server.go:460] "Adding debug handlers to kubelet server" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.785035 4832 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.785258 4832 factory.go:103] Registering Raw factory Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.785299 4832 manager.go:1196] Started watching for new ooms in manager Jan 31 
04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.785231 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.787945 4832 manager.go:319] Starting recovery of all containers Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.791823 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb72f29fbb024 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:43:11.775600676 +0000 UTC m=+0.724422411,LastTimestamp:2026-01-31 04:43:11.775600676 +0000 UTC m=+0.724422411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806583 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806660 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 
04:43:11.806680 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806698 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806713 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806732 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806747 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806762 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806780 4832 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806796 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806811 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806825 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806841 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806858 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806887 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806906 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806921 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806935 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806950 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806966 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806980 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.806995 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807010 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807025 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807040 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807058 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807081 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807101 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807118 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807176 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807194 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807212 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807228 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807245 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807259 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807281 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.807296 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.811905 4832 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.811965 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.811987 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812004 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812021 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812037 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812050 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812062 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812076 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812088 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812100 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812113 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812126 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812140 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812152 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812166 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812189 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812205 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812220 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812234 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 
04:43:11.812247 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812263 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812277 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812291 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812306 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812324 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812340 4832 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812354 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812384 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812398 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812412 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812425 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812440 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812453 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812467 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812480 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812498 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812510 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812523 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812538 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812551 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812628 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812644 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812656 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812677 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812696 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812709 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812722 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812735 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812748 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812765 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812777 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812791 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812808 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812822 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812834 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812846 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812858 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812870 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812883 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812896 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812909 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812922 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: 
I0131 04:43:11.812935 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812947 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812960 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812972 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.812985 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813007 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813023 4832 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813037 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813051 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813064 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813076 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813089 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813105 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813118 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813134 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813147 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813159 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813171 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813183 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813203 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813217 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813230 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813244 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813263 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813276 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813289 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813301 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813313 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813328 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813346 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813359 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813377 4832 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813390 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813408 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813424 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813438 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813452 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813466 4832 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813479 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813492 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813504 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813516 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813533 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813545 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813639 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813659 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813673 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813689 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813703 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813717 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813730 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813744 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813759 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813775 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813787 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813805 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813818 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813831 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.813933 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814002 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814028 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814051 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814073 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814094 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814115 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814141 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814161 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814183 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814205 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814228 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814251 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814279 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814300 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814324 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814345 4832 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814374 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814396 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814419 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814439 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814467 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814488 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814513 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814534 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814583 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814607 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814628 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814654 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814674 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814701 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814729 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814753 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814776 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814797 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814818 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814838 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814858 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814879 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814902 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814922 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814942 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814962 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.814981 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815002 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815022 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815042 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" 
seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815064 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815087 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815106 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815127 4832 reconstruct.go:97] "Volume reconstruction finished" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.815144 4832 reconciler.go:26] "Reconciler: start to sync state" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.832244 4832 manager.go:324] Recovery completed Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.847208 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.849357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.849419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.849435 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.850926 4832 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.850944 4832 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.850969 4832 state_mem.go:36] "Initialized new in-memory state store" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.855731 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.858009 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.858063 4832 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.858100 4832 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.858272 4832 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 04:43:11 crc kubenswrapper[4832]: W0131 04:43:11.858986 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.859051 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.877160 4832 
policy_none.go:49] "None policy: Start" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.878321 4832 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.878377 4832 state_mem.go:35] "Initializing new in-memory state store" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.880652 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.945460 4832 manager.go:334] "Starting Device Plugin manager" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.945872 4832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.945906 4832 server.go:79] "Starting device plugin registration server" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.946599 4832 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.946659 4832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.946870 4832 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.947012 4832 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.947029 4832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.953899 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.959049 4832 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.959136 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960442 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960819 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.960876 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.964828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.965539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.965645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.964843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.965809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.965839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.966109 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.966300 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.966395 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967411 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967749 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967869 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.967988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968788 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.968917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: 
I0131 04:43:11.969024 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.969099 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.969583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.969677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.969804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.970081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.970108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.970121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.970327 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.970415 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.971121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.971267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:11 crc kubenswrapper[4832]: I0131 04:43:11.971413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:11 crc kubenswrapper[4832]: E0131 04:43:11.986535 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.017892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.017929 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.017953 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.017969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.017986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018022 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018040 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.018172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.047550 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.048996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.049051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.049068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.049116 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: E0131 04:43:12.049700 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection 
refused" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.119888 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.119956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120371 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120417 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120446 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120549 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120635 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120661 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 
04:43:12.120762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.120877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.250312 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.252220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.252284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.252304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.252353 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: E0131 04:43:12.253284 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.309178 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.313900 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.326937 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.334252 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.338293 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.354433 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8b8049d7f5420437cdb2d9f9fd5802a6c4543fea1bd5c0cc533fa6a00794f66d WatchSource:0}: Error finding container 8b8049d7f5420437cdb2d9f9fd5802a6c4543fea1bd5c0cc533fa6a00794f66d: Status 404 returned error can't find the container with id 8b8049d7f5420437cdb2d9f9fd5802a6c4543fea1bd5c0cc533fa6a00794f66d Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.356132 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4f46f0e5c4fef7e86ee67736dbee20fcd68c6a4b766232c231af9b98069fbc4c WatchSource:0}: Error finding container 
4f46f0e5c4fef7e86ee67736dbee20fcd68c6a4b766232c231af9b98069fbc4c: Status 404 returned error can't find the container with id 4f46f0e5c4fef7e86ee67736dbee20fcd68c6a4b766232c231af9b98069fbc4c Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.361069 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1d4f94fa043ce7ca046a411f316de90701d36025802d809c26993fa2c5ac6e5e WatchSource:0}: Error finding container 1d4f94fa043ce7ca046a411f316de90701d36025802d809c26993fa2c5ac6e5e: Status 404 returned error can't find the container with id 1d4f94fa043ce7ca046a411f316de90701d36025802d809c26993fa2c5ac6e5e Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.364489 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-47119d21a3524b40f11be6443840917c594f521de43dc8281656820843c72231 WatchSource:0}: Error finding container 47119d21a3524b40f11be6443840917c594f521de43dc8281656820843c72231: Status 404 returned error can't find the container with id 47119d21a3524b40f11be6443840917c594f521de43dc8281656820843c72231 Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.366350 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-00c0e3fcfbddc227f7bd8a396bd3bfec6eadaf602b91d6032a1a4989a2ccb177 WatchSource:0}: Error finding container 00c0e3fcfbddc227f7bd8a396bd3bfec6eadaf602b91d6032a1a4989a2ccb177: Status 404 returned error can't find the container with id 00c0e3fcfbddc227f7bd8a396bd3bfec6eadaf602b91d6032a1a4989a2ccb177 Jan 31 04:43:12 crc kubenswrapper[4832]: E0131 04:43:12.388188 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.653755 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.655684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.655752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.655773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.655820 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: E0131 04:43:12.656594 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.778032 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.780105 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:23:30.782354289 +0000 UTC Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.864718 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"00c0e3fcfbddc227f7bd8a396bd3bfec6eadaf602b91d6032a1a4989a2ccb177"} Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.866403 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"47119d21a3524b40f11be6443840917c594f521de43dc8281656820843c72231"} Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.867777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d4f94fa043ce7ca046a411f316de90701d36025802d809c26993fa2c5ac6e5e"} Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.869702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4f46f0e5c4fef7e86ee67736dbee20fcd68c6a4b766232c231af9b98069fbc4c"} Jan 31 04:43:12 crc kubenswrapper[4832]: I0131 04:43:12.871107 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8b8049d7f5420437cdb2d9f9fd5802a6c4543fea1bd5c0cc533fa6a00794f66d"} Jan 31 04:43:12 crc kubenswrapper[4832]: W0131 04:43:12.908738 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:12 crc kubenswrapper[4832]: E0131 04:43:12.909120 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:13 crc kubenswrapper[4832]: W0131 04:43:13.028298 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.028454 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:13 crc kubenswrapper[4832]: W0131 04:43:13.164278 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.164419 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:13 crc kubenswrapper[4832]: W0131 04:43:13.184016 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: 
connection refused Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.184214 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.189322 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.457715 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.459024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.459059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.459069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.459095 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.459547 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.765141 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates 
Jan 31 04:43:13 crc kubenswrapper[4832]: E0131 04:43:13.767502 4832 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.777715 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.780771 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:29:45.883156173 +0000 UTC Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.875769 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878" exitCode=0 Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.875850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.875968 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.877039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.877064 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.877073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.878126 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7" exitCode=0 Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.878229 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.878240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.879523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.879594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.879616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.881287 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.881705 4832 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c" exitCode=0 Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.881768 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.881820 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.883224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.884980 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85" exitCode=0 Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.885054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.885393 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.887095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.887131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.887144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.888232 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.888287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7"} Jan 31 04:43:13 crc kubenswrapper[4832]: I0131 04:43:13.888305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.777543 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:14 crc kubenswrapper[4832]: W0131 04:43:14.777522 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:14 crc kubenswrapper[4832]: E0131 04:43:14.777624 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.780938 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:05:02.653245058 +0000 UTC Jan 31 04:43:14 crc kubenswrapper[4832]: E0131 04:43:14.790483 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Jan 31 04:43:14 crc kubenswrapper[4832]: E0131 04:43:14.879236 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb72f29fbb024 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:43:11.775600676 +0000 UTC m=+0.724422411,LastTimestamp:2026-01-31 04:43:11.775600676 +0000 UTC m=+0.724422411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.895758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.895800 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.895811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.895822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.898703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.898749 4832 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.898760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.898776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.900405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.900433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.900444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.902606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.902636 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.903438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.903461 
4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.903470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.908134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.908394 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.909957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.910056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.910136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.911053 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1" exitCode=0 Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.911134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1"} Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.911263 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.911787 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.912375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.912447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:14 crc kubenswrapper[4832]: I0131 04:43:14.912501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.060708 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.061876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.061909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.061920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.061943 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:15 crc kubenswrapper[4832]: E0131 04:43:15.062252 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.136:6443: connect: connection refused" node="crc" Jan 31 04:43:15 crc kubenswrapper[4832]: W0131 04:43:15.556664 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.136:6443: connect: connection refused Jan 31 04:43:15 crc kubenswrapper[4832]: E0131 04:43:15.556762 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.136:6443: connect: connection refused" logger="UnhandledError" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.781370 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:34:23.148135168 +0000 UTC Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.917954 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196" exitCode=0 Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.918054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196"} Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.918303 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.919768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.919818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.919835 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.926131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220"} Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.926213 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.926317 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.926351 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.926318 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.928593 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.929593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.929615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.929625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.930051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.930078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:15 crc kubenswrapper[4832]: I0131 04:43:15.930102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.782201 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:22:04.025470664 +0000 UTC Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.934937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51"} Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.935052 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83"} Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 
04:43:16.935093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f"} Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.935120 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae"} Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.935061 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.935201 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.935142 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.936696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.936748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.936761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.936929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.936992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:16 crc kubenswrapper[4832]: I0131 04:43:16.937018 4832 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.762816 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.782848 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:33:51.400580077 +0000 UTC Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.942472 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00"} Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.942618 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.942618 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.943945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.943974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.943985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.944042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.944074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:17 crc 
kubenswrapper[4832]: I0131 04:43:17.944101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:17 crc kubenswrapper[4832]: I0131 04:43:17.986277 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.112767 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.263129 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.264911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.264964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.264983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.265016 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.783955 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:34:35.409105925 +0000 UTC Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.833283 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.945373 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.945416 4832 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:18 crc kubenswrapper[4832]: I0131 04:43:18.947123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.016938 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.479376 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.479553 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.480837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.480868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.480879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.491620 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.784482 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:59:43.788199892 +0000 UTC Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.947840 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.947977 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.947977 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.948954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:19 crc 
kubenswrapper[4832]: I0131 04:43:19.949487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:19 crc kubenswrapper[4832]: I0131 04:43:19.949628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.047348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.784684 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:50:04.590296736 +0000 UTC Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.950347 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.951459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.951545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:20 crc kubenswrapper[4832]: I0131 04:43:20.951599 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.785678 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-29 16:42:26.108241527 +0000 UTC Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.871426 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.951962 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.953031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.953059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.953069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:21 crc kubenswrapper[4832]: E0131 04:43:21.954121 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.979878 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.980112 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.981643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.981702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:21 crc kubenswrapper[4832]: I0131 04:43:21.981715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:22 crc 
kubenswrapper[4832]: I0131 04:43:22.786622 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:14:57.003873871 +0000 UTC Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.596103 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.596287 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.597469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.597507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.597519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.600506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.786820 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:16:57.156610347 +0000 UTC Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.958188 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.959804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.959858 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:23 crc kubenswrapper[4832]: I0131 04:43:23.959877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:24 crc kubenswrapper[4832]: I0131 04:43:24.787535 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:36:54.086641795 +0000 UTC Jan 31 04:43:25 crc kubenswrapper[4832]: W0131 04:43:25.695829 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 04:43:25 crc kubenswrapper[4832]: I0131 04:43:25.696020 4832 trace.go:236] Trace[468172686]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:43:15.694) (total time: 10001ms): Jan 31 04:43:25 crc kubenswrapper[4832]: Trace[468172686]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (04:43:25.695) Jan 31 04:43:25 crc kubenswrapper[4832]: Trace[468172686]: [10.001921392s] [10.001921392s] END Jan 31 04:43:25 crc kubenswrapper[4832]: E0131 04:43:25.696063 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 04:43:25 crc kubenswrapper[4832]: W0131 04:43:25.755977 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 04:43:25 crc kubenswrapper[4832]: I0131 04:43:25.756098 4832 trace.go:236] Trace[1457638632]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:43:15.754) (total time: 10001ms): Jan 31 04:43:25 crc kubenswrapper[4832]: Trace[1457638632]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (04:43:25.755) Jan 31 04:43:25 crc kubenswrapper[4832]: Trace[1457638632]: [10.001152168s] [10.001152168s] END Jan 31 04:43:25 crc kubenswrapper[4832]: E0131 04:43:25.756131 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 04:43:25 crc kubenswrapper[4832]: I0131 04:43:25.778387 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 04:43:25 crc kubenswrapper[4832]: I0131 04:43:25.788627 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:48:39.877504262 +0000 UTC Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.597069 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.597187 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.788830 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:21:17.514596086 +0000 UTC Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.968678 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.970609 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220" exitCode=255 Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.970654 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220"} Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.974249 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.975759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.975824 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.975837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:26 crc kubenswrapper[4832]: I0131 04:43:26.976596 4832 scope.go:117] "RemoveContainer" containerID="84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.270691 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.271095 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.282216 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.282327 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 
04:43:27.789873 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:33:24.689773911 +0000 UTC Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.872760 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.975538 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.978192 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695"} Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.978346 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.980063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.980104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:27 crc kubenswrapper[4832]: I0131 04:43:27.980116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.790701 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:35:25.373214506 +0000 UTC Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.981647 4832 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.981720 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.982856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.982922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:28 crc kubenswrapper[4832]: I0131 04:43:28.982947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.017297 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.026445 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.123202 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.772149 4832 apiserver.go:52] "Watching apiserver" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.786411 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.786738 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.787165 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.787282 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:29 crc kubenswrapper[4832]: E0131 04:43:29.787377 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.787404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.787833 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.787882 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.788073 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:29 crc kubenswrapper[4832]: E0131 04:43:29.788170 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:29 crc kubenswrapper[4832]: E0131 04:43:29.788301 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.792239 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:19:56.785201557 +0000 UTC Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.793118 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.793499 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.796179 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.796232 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.796905 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.797084 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.797210 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.797900 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.799146 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.847419 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.869893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.881403 4832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.888317 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.905762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.919322 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.938777 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.954722 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.969793 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.984716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:29 crc kubenswrapper[4832]: I0131 04:43:29.991127 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.007908 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.017308 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.029204 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.042819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.060612 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.076508 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.095387 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.792487 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:51:34.1029864 +0000 UTC Jan 31 04:43:30 crc kubenswrapper[4832]: I0131 04:43:30.858996 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:30 crc kubenswrapper[4832]: E0131 04:43:30.859600 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:31 crc kubenswrapper[4832]: E0131 04:43:31.000818 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.793146 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:25:54.17973421 +0000 UTC Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.859049 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.859130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:31 crc kubenswrapper[4832]: E0131 04:43:31.859263 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:31 crc kubenswrapper[4832]: E0131 04:43:31.859455 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.880092 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.895490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.913662 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.931503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.951444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, 
/tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.974008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:31 crc kubenswrapper[4832]: I0131 04:43:31.992547 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.019905 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.039061 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.040309 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.043739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.054896 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.070675 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.089065 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.106435 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.121512 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.137373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.167655 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.188881 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.205682 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.222638 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.237338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.258288 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.272077 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.273988 4832 trace.go:236] Trace[1731830275]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:43:19.826) (total time: 
12447ms): Jan 31 04:43:32 crc kubenswrapper[4832]: Trace[1731830275]: ---"Objects listed" error: 12447ms (04:43:32.273) Jan 31 04:43:32 crc kubenswrapper[4832]: Trace[1731830275]: [12.44712695s] [12.44712695s] END Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.274042 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.276600 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.276866 4832 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.276907 4832 trace.go:236] Trace[1006089869]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 04:43:19.537) (total time: 12739ms): Jan 31 04:43:32 crc kubenswrapper[4832]: Trace[1006089869]: ---"Objects listed" error: 12739ms (04:43:32.276) Jan 31 04:43:32 crc kubenswrapper[4832]: Trace[1006089869]: [12.739708448s] [12.739708448s] END Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.276941 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.280220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.280336 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.292712 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378043 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378096 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378177 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378210 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378224 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378239 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378253 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.378298 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378363 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378430 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378447 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 
04:43:32.378479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378518 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378534 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378548 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378536 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378636 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378684 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378718 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378807 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378847 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378863 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378880 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378920 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.378935 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379036 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379127 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379189 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379205 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379221 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379251 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379330 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379452 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379523 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379539 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379557 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379617 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379634 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379683 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379745 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379778 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379843 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379875 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:43:32 crc kubenswrapper[4832]: 
I0131 04:43:32.379890 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379907 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379923 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.380094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380110 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380127 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380142 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380159 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380209 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380226 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 04:43:32 
crc kubenswrapper[4832]: I0131 04:43:32.380274 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380318 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380372 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380388 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380426 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380441 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380495 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380511 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380545 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380578 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380594 
4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380612 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380631 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380648 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380664 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" 
(UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380732 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380784 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380802 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380825 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380842 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380890 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 04:43:32 crc kubenswrapper[4832]: 
I0131 04:43:32.380905 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380922 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380940 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380999 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381015 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381034 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381052 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.381123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381259 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381299 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.381318 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381357 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381392 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381408 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381426 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379212 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379206 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379330 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379543 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379755 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379850 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.379961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380213 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380333 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380435 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380480 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380752 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381803 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381079 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.380982 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381274 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381348 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381519 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381525 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381796 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381141 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381866 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.381923 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:43:32.881905041 +0000 UTC m=+21.830726726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382030 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382117 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382297 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382529 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382617 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.382916 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383153 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383262 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383288 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383304 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383314 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383543 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.383840 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384147 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384499 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384472 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384549 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384718 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384803 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.384852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385254 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385281 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385680 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.385810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386329 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386385 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386571 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386728 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.386735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387202 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387280 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387305 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387601 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387668 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387947 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.387986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.388267 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.388350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.388772 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.389052 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.389184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.390113 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.390334 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.390592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.390909 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391019 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391265 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391783 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.391984 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.392114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.392542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.392619 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.392970 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.393184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.393387 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.393480 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.393817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.393616 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394128 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394185 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394217 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394311 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394332 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394530 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394838 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.394863 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395416 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.395466 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396433 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396804 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.381571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.396882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397153 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397194 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397222 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397253 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397280 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397303 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397330 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397369 4832 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397405 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397428 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397452 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397543 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397635 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397643 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397658 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.397756 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.397829 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:32.897807719 +0000 UTC m=+21.846629424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397893 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.397889 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398057 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398074 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.398182 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398366 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.398818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399006 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399383 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399803 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399849 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399965 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399981 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.399979 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400107 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400122 4832 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.400345 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:32.900312724 +0000 UTC m=+21.849134409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400442 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400520 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" 
Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400471 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400506 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.400793 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: 
"kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404480 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404500 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404512 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404527 4832 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404538 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 
04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404549 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404607 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404618 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404629 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404639 4832 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404648 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404658 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404669 4832 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404681 4832 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404691 4832 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404700 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404838 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404850 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404859 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404871 4832 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404883 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404894 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404909 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404919 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404929 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404939 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404951 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404960 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404970 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404980 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.404990 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405013 4832 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405023 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405031 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405040 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405049 4832 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405058 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405066 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405075 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405084 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405093 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 
crc kubenswrapper[4832]: I0131 04:43:32.405104 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405113 4832 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405123 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405132 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405142 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405152 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405161 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405170 4832 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405180 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405191 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405201 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405212 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405222 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405233 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405243 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405254 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405264 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405274 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405284 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405295 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405305 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405316 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.405325 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405336 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405347 4832 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405358 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405373 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405382 4832 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405391 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405313 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405400 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405672 4832 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405691 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405704 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405719 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405761 4832 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.405775 4832 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405786 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405788 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405798 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405798 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.405816 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406030 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406040 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406049 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406059 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406068 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406077 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: 
I0131 04:43:32.406086 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406098 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406107 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406116 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406125 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406134 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406143 4832 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406152 4832 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406161 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406169 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406179 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406188 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406199 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406208 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406218 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406227 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406237 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406246 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406255 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406264 4832 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406272 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406282 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406291 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406300 4832 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406309 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406319 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406346 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406356 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406365 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406374 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406441 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406452 4832 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406462 4832 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406471 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406480 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406489 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406498 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406508 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406517 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406526 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406535 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406544 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406553 4832 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406578 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406588 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406598 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406607 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406616 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406625 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406635 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406644 4832 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406654 4832 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406663 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406673 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406686 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406694 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406704 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406759 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406768 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406780 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406789 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406798 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406806 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406815 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406824 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406832 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc 
kubenswrapper[4832]: I0131 04:43:32.406842 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406851 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406860 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406869 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406878 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406887 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.406895 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.407348 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.407372 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.407821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.407912 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.407942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.408380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.409391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.410535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.413234 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414004 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414023 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414036 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414114 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:32.914093198 +0000 UTC m=+21.862914883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414277 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414309 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414318 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.414345 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:32.914338486 +0000 UTC m=+21.863160171 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.414436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.416628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.416699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.416879 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.417309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.417413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.418128 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.418362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.418388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.422307 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.424352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.425943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.428192 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.434451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.438637 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.445707 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") 
pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.452699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.453017 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.461449 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.507987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508075 4832 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508087 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508096 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508105 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508113 4832 reconciler_common.go:293] "Volume detached for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508137 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508147 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508155 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508164 4832 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508173 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508184 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508194 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508225 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508236 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508246 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508257 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508269 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508298 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508313 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 
04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508325 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508336 4832 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508347 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508382 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508395 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508406 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508416 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508427 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508438 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508469 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508480 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508492 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508502 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508512 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508523 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath 
\"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508552 4832 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.508847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.511086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 04:43:32 crc kubenswrapper[4832]: W0131 04:43:32.521099 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a969ca9ea0879537aa611f780004283cff3f7079b23b1bde79f17da9f127ab7c WatchSource:0}: Error finding container a969ca9ea0879537aa611f780004283cff3f7079b23b1bde79f17da9f127ab7c: Status 404 returned error can't find the container with id a969ca9ea0879537aa611f780004283cff3f7079b23b1bde79f17da9f127ab7c Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.529142 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.540295 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 04:43:32 crc kubenswrapper[4832]: W0131 04:43:32.555146 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5b50d558bf7b26f0dc4d61d632a40da016a8794e81ec07ef084a5078c7b2bdfc WatchSource:0}: Error finding container 5b50d558bf7b26f0dc4d61d632a40da016a8794e81ec07ef084a5078c7b2bdfc: Status 404 returned error can't find the container with id 5b50d558bf7b26f0dc4d61d632a40da016a8794e81ec07ef084a5078c7b2bdfc Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.793683 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:47:21.32140972 +0000 UTC Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.827769 4832 csr.go:261] certificate signing request csr-cvmh6 is approved, waiting to be issued Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.858932 4832 csr.go:257] certificate signing request csr-cvmh6 is issued Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.859009 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.859177 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.912483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.912588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.912626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.912753 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.912753 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 04:43:33.912713991 +0000 UTC m=+22.861535676 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.912828 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:33.912819724 +0000 UTC m=+22.861641409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.912844 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: E0131 04:43:32.912946 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:33.912922987 +0000 UTC m=+22.861744672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.996334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b50d558bf7b26f0dc4d61d632a40da016a8794e81ec07ef084a5078c7b2bdfc"} Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.997094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fd57e60244db068f4a57b494638e9263298ca68ceb61faa55be8a6dc5c2538b1"} Jan 31 04:43:32 crc kubenswrapper[4832]: I0131 04:43:32.998195 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a969ca9ea0879537aa611f780004283cff3f7079b23b1bde79f17da9f127ab7c"} Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.013272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.013318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013434 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013452 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013464 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013521 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:34.013502801 +0000 UTC m=+22.962324486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013532 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013593 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013611 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.013678 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:34.013658096 +0000 UTC m=+22.962479781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.493159 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bw458"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.493620 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-899xk"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.493880 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.494338 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qk99s"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.494641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.494641 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.501551 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.502157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.502313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.502756 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.502762 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.502914 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.503488 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.503544 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.503694 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.504537 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.505775 4832 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.507543 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.507795 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518723 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2bq\" (UniqueName: \"kubernetes.io/projected/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-kube-api-access-5h2bq\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-proxy-tls\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-os-release\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-binary-copy\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c35251d7-6c14-4d3b-94d9-afa0287c2894-hosts-file\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dknt\" (UniqueName: \"kubernetes.io/projected/c35251d7-6c14-4d3b-94d9-afa0287c2894-kube-api-access-8dknt\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.518979 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-rootfs\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.519003 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-mcd-auth-proxy-config\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.519022 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-system-cni-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.519040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.519059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6fs\" (UniqueName: \"kubernetes.io/projected/9ea025bd-5921-4529-887b-d627fa8e245e-kube-api-access-vg6fs\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.519164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-cnibin\") pod \"multus-additional-cni-plugins-899xk\" (UID: 
\"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.524695 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.536946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.549171 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.558849 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.574947 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.586821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.599020 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.613443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.613456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.618824 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-rootfs\") pod \"machine-config-daemon-bw458\" (UID: 
\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-mcd-auth-proxy-config\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-system-cni-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620219 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6fs\" (UniqueName: \"kubernetes.io/projected/9ea025bd-5921-4529-887b-d627fa8e245e-kube-api-access-vg6fs\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-cnibin\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620263 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2bq\" (UniqueName: \"kubernetes.io/projected/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-kube-api-access-5h2bq\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-proxy-tls\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620299 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-os-release\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620291 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-rootfs\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-system-cni-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620317 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-binary-copy\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c35251d7-6c14-4d3b-94d9-afa0287c2894-hosts-file\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.620461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dknt\" (UniqueName: \"kubernetes.io/projected/c35251d7-6c14-4d3b-94d9-afa0287c2894-kube-api-access-8dknt\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/c35251d7-6c14-4d3b-94d9-afa0287c2894-hosts-file\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-cnibin\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621118 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-mcd-auth-proxy-config\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621118 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-binary-copy\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-os-release\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.621727 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/9ea025bd-5921-4529-887b-d627fa8e245e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.622154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ea025bd-5921-4529-887b-d627fa8e245e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.625454 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.626993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-proxy-tls\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.641741 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.648281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2bq\" (UniqueName: \"kubernetes.io/projected/3c5f0a80-5a4f-4583-88d0-5e504d87d00a-kube-api-access-5h2bq\") pod \"machine-config-daemon-bw458\" (UID: \"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\") " pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.650492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6fs\" (UniqueName: \"kubernetes.io/projected/9ea025bd-5921-4529-887b-d627fa8e245e-kube-api-access-vg6fs\") pod \"multus-additional-cni-plugins-899xk\" (UID: \"9ea025bd-5921-4529-887b-d627fa8e245e\") " pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.652765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8dknt\" (UniqueName: \"kubernetes.io/projected/c35251d7-6c14-4d3b-94d9-afa0287c2894-kube-api-access-8dknt\") pod \"node-resolver-qk99s\" (UID: \"c35251d7-6c14-4d3b-94d9-afa0287c2894\") " pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.657270 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.669372 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.705773 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.722389 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.740222 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.767287 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.793428 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.794416 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:26:19.797862282 +0000 UTC Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.807192 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.813672 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qk99s" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.819902 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-899xk" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.822893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\
"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dc
a8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: W0131 04:43:33.826184 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35251d7_6c14_4d3b_94d9_afa0287c2894.slice/crio-adaf60a586d02ae207a923d43bc038b8f8e253b69e9b5011e735fa9085b7ef36 WatchSource:0}: Error finding container adaf60a586d02ae207a923d43bc038b8f8e253b69e9b5011e735fa9085b7ef36: Status 404 returned error can't find the container with id adaf60a586d02ae207a923d43bc038b8f8e253b69e9b5011e735fa9085b7ef36 Jan 31 04:43:33 crc kubenswrapper[4832]: W0131 04:43:33.833318 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea025bd_5921_4529_887b_d627fa8e245e.slice/crio-97a77bae1e1065962ec2a86cf245118fbe0144dd7a8df5c8c25b6c4dc8502d02 WatchSource:0}: Error finding container 97a77bae1e1065962ec2a86cf245118fbe0144dd7a8df5c8c25b6c4dc8502d02: Status 404 returned error can't find the container with id 
97a77bae1e1065962ec2a86cf245118fbe0144dd7a8df5c8c25b6c4dc8502d02 Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.835283 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.847494 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.858580 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.858923 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.858941 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.859071 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.859152 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.860777 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 04:38:32 +0000 UTC, rotation deadline is 2026-11-27 00:23:39.201969194 +0000 UTC Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.860813 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7195h40m5.341159277s for next certificate rotation Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.862898 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.863819 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.864653 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.865312 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.867121 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.867629 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.868756 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.869340 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.870364 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.870895 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.871460 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.872664 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.873217 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.874177 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.874770 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.875705 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.876286 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.876803 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.878552 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.879550 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.880092 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.881332 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.881999 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.890511 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.891250 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.892599 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.893290 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.893842 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.894938 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.895456 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.896414 4832 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.896516 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.898245 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.899460 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.900097 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.902715 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.906440 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.907132 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.908602 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.909410 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.910743 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.911409 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.920237 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.921073 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.922090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.922238 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:43:35.922204633 +0000 UTC m=+24.871026488 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.922330 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.922846 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.923680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.923821 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.923884 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.923915 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:35.923890894 +0000 UTC m=+24.872712569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: E0131 04:43:33.923956 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:35.923931006 +0000 UTC m=+24.872752691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.924687 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.925759 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.926523 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.927656 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.928212 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.928738 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.929805 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.930400 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.931386 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.931931 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7gvmz"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.933012 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-frk6z"] Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.933220 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.933318 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-frk6z" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.938213 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.941449 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.941682 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.942184 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.942355 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.942499 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.944895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.944983 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.945096 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.969053 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.983954 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:33 crc kubenswrapper[4832]: I0131 04:43:33.995522 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.005295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.005342 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.007733 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"9dba3634f3f0f151f03bad29a75c7defd9eb16af1c4c811f176242d8414a3ada"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.010005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.011022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerStarted","Data":"97a77bae1e1065962ec2a86cf245118fbe0144dd7a8df5c8c25b6c4dc8502d02"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.012254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qk99s" event={"ID":"c35251d7-6c14-4d3b-94d9-afa0287c2894","Type":"ContainerStarted","Data":"adaf60a586d02ae207a923d43bc038b8f8e253b69e9b5011e735fa9085b7ef36"} Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.014309 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.018115 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-bin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 
04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-kubelet\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024853 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-multus-certs\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-494kp\" (UniqueName: \"kubernetes.io/projected/df4dafae-fa72-4f03-8531-93538336b0cd-kube-api-access-494kp\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.024959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc 
kubenswrapper[4832]: I0131 04:43:34.025007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025093 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 
04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-cni-binary-copy\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-netns\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-multus-daemon-config\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-os-release\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 
04:43:34.025245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-hostroot\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025289 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-system-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc 
kubenswrapper[4832]: I0131 04:43:34.025393 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-multus\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025422 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025478 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb97j\" (UniqueName: \"kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-k8s-cni-cncf-io\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025530 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-cnibin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-etc-kubernetes\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " 
pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-socket-dir-parent\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-conf-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.025792 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027034 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027062 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027077 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027128 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:36.027110958 +0000 UTC m=+24.975932643 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027520 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027537 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027546 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.027593 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:36.027586022 +0000 UTC m=+24.976407707 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.036345 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.050836 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.062872 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.081444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.093263 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.102235 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.113723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127025 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-kubelet\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-multus-certs\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127180 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-kubelet\") pod \"multus-frk6z\" (UID: 
\"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-bin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-bin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127294 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-494kp\" (UniqueName: \"kubernetes.io/projected/df4dafae-fa72-4f03-8531-93538336b0cd-kube-api-access-494kp\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc 
kubenswrapper[4832]: I0131 04:43:34.127367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127384 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127292 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-multus-certs\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 
04:43:34.127540 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-cni-binary-copy\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-netns\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-multus-daemon-config\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127897 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.127921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-netns\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " 
pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-cni-binary-copy\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128672 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df4dafae-fa72-4f03-8531-93538336b0cd-multus-daemon-config\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128735 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-os-release\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128802 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-hostroot\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-os-release\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-hostroot\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.128975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129068 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-system-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129100 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-multus\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129196 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-var-lib-cni-multus\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129229 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb97j\" (UniqueName: \"kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129258 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-k8s-cni-cncf-io\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129297 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-system-cni-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-etc-kubernetes\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129282 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-etc-kubernetes\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " 
pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-host-run-k8s-cni-cncf-io\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129636 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-cnibin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-socket-dir-parent\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-cnibin\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-conf-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129870 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-conf-dir\") pod \"multus-frk6z\" (UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.129914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df4dafae-fa72-4f03-8531-93538336b0cd-multus-socket-dir-parent\") pod \"multus-frk6z\" 
(UID: \"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.131160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.143017 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.148205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb97j\" (UniqueName: \"kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j\") pod \"ovnkube-node-7gvmz\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.152536 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-494kp\" (UniqueName: \"kubernetes.io/projected/df4dafae-fa72-4f03-8531-93538336b0cd-kube-api-access-494kp\") pod \"multus-frk6z\" (UID: 
\"df4dafae-fa72-4f03-8531-93538336b0cd\") " pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.156968 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.168551 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.179848 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.194408 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.213375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.228674 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.239644 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.249810 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.252119 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.261592 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-frk6z" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.261574 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4
ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: W0131 04:43:34.266049 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode089fa33_e032_4755_8b7e_262adfecc82f.slice/crio-bd634a7c0e4d4e8df72dd5cbea912c13895b43a946a14c4cf92c1eada836ab06 WatchSource:0}: Error finding container bd634a7c0e4d4e8df72dd5cbea912c13895b43a946a14c4cf92c1eada836ab06: Status 404 returned error can't find the container with id bd634a7c0e4d4e8df72dd5cbea912c13895b43a946a14c4cf92c1eada836ab06 Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.275883 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: W0131 04:43:34.277800 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf4dafae_fa72_4f03_8531_93538336b0cd.slice/crio-fa86312818dfdf214324e9ae57dc25e88c8106366acb698757f2fd8c196477d4 WatchSource:0}: Error finding container fa86312818dfdf214324e9ae57dc25e88c8106366acb698757f2fd8c196477d4: Status 404 returned error can't find the container with id fa86312818dfdf214324e9ae57dc25e88c8106366acb698757f2fd8c196477d4 Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.290734 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.305291 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.318343 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.329762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.350048 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.795073 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:06:37.175263886 +0000 UTC Jan 31 04:43:34 crc kubenswrapper[4832]: I0131 04:43:34.858885 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:34 crc kubenswrapper[4832]: E0131 04:43:34.859073 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.016042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerStarted","Data":"fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.016108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerStarted","Data":"fa86312818dfdf214324e9ae57dc25e88c8106366acb698757f2fd8c196477d4"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.018729 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" containerID="e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f" exitCode=0 Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.018876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.021155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qk99s" 
event={"ID":"c35251d7-6c14-4d3b-94d9-afa0287c2894","Type":"ContainerStarted","Data":"23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.022646 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" exitCode=0 Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.022702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.022746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"bd634a7c0e4d4e8df72dd5cbea912c13895b43a946a14c4cf92c1eada836ab06"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.027548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.027641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720"} Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.042329 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.061680 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.077583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.094503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.112149 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.126957 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.137922 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.150726 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.164984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.180038 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.191028 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.206447 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.223504 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.237741 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.260637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.280775 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.298981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.309404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.325034 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.340926 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.356875 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.369593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.382074 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.398018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.411601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.423250 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.438367 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.452171 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:35Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.795844 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:25:58.411869558 +0000 UTC Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.859160 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.859185 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.859316 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.859482 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.947816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.951038 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 04:43:39.950999665 +0000 UTC m=+28.899821340 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.951139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:35 crc kubenswrapper[4832]: I0131 04:43:35.951190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.951279 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.951382 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:43:39.951356145 +0000 UTC m=+28.900177830 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.951380 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:35 crc kubenswrapper[4832]: E0131 04:43:35.951717 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:39.951670524 +0000 UTC m=+28.900492209 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.032664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0"} Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.034775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerStarted","Data":"f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334"} Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.037867 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2"} Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.037913 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5"} Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.037932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a"} Jan 31 04:43:36 crc 
kubenswrapper[4832]: I0131 04:43:36.052212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.052265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052412 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052430 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052441 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052477 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052520 4832 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052538 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052497 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:40.052481105 +0000 UTC m=+29.001302790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.052638 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:40.052605099 +0000 UTC m=+29.001426824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.075518 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce3
2aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.096798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.118956 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.135103 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.153590 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.165655 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.176005 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.190510 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.205509 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.217359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.235992 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.253790 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.264628 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.277981 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.290906 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.304588 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.320471 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.339620 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.354608 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.367323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.380982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.394258 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.405861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.422012 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.436153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.450485 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.470702 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.503134 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:36Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.796621 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:13:03.387800934 +0000 UTC Jan 31 04:43:36 crc kubenswrapper[4832]: I0131 04:43:36.858745 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:36 crc kubenswrapper[4832]: E0131 04:43:36.858883 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.044222 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" containerID="f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334" exitCode=0 Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.044339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334"} Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.049750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a"} Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.049819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287"} Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.049834 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" 
event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114"} Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.069817 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.086959 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.105852 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.130323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.150601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.183192 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.211446 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.229079 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.242611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.262895 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.275670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.285917 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.300490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.319826 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.593909 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nspv9"] Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.594403 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.597192 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.597994 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.598124 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.598594 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.613063 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.627530 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.647504 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.663773 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.670788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9f9\" (UniqueName: \"kubernetes.io/projected/830c4bc3-45df-4e7b-a494-dec77c4318ac-kube-api-access-tq9f9\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.670860 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/830c4bc3-45df-4e7b-a494-dec77c4318ac-host\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.670882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/830c4bc3-45df-4e7b-a494-dec77c4318ac-serviceca\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.679312 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.693906 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.712052 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.725334 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.737818 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.751153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.765101 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.767278 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.771665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/830c4bc3-45df-4e7b-a494-dec77c4318ac-host\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.771710 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/830c4bc3-45df-4e7b-a494-dec77c4318ac-serviceca\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.771759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq9f9\" (UniqueName: 
\"kubernetes.io/projected/830c4bc3-45df-4e7b-a494-dec77c4318ac-kube-api-access-tq9f9\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.771794 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/830c4bc3-45df-4e7b-a494-dec77c4318ac-host\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.772993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/830c4bc3-45df-4e7b-a494-dec77c4318ac-serviceca\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.777151 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.791241 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.793979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq9f9\" (UniqueName: \"kubernetes.io/projected/830c4bc3-45df-4e7b-a494-dec77c4318ac-kube-api-access-tq9f9\") pod \"node-ca-nspv9\" (UID: \"830c4bc3-45df-4e7b-a494-dec77c4318ac\") " pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.797299 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:44:15.242904079 +0000 UTC Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.805748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.828073 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.841863 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.856053 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.858992 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.859125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:37 crc kubenswrapper[4832]: E0131 04:43:37.859123 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:37 crc kubenswrapper[4832]: E0131 04:43:37.859443 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.870781 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.884313 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.897877 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.909624 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nspv9" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.911749 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.928129 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.941035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.958850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:37 crc kubenswrapper[4832]: I0131 04:43:37.975990 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:37Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.005709 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.019452 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.034161 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.054253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nspv9" event={"ID":"830c4bc3-45df-4e7b-a494-dec77c4318ac","Type":"ContainerStarted","Data":"ecd4533152053e02e342b6ff651fb458add2cf9dcb94683c37f253c3e70397af"} Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.057197 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" 
containerID="5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3" exitCode=0 Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.057248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3"} Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.071651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.104742 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.142937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.186779 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.224792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.266664 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.310292 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.343072 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.391654 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.427339 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.465886 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.514112 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.542869 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.581661 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0cc
f772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.629755 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.663905 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.676688 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.678521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.678568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.678578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.678677 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.704309 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.737260 4832 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.737755 4832 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 
04:43:38.739243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.739293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.739309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.739331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.739348 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.760938 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.765089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.765128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.765138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.765155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.765164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.778273 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.781650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.781680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.781689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.781705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.781716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.796307 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.797881 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:34:32.130603286 +0000 UTC Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.799596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.799633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.799646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.799664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.799677 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.814587 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.818695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.818744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.818758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.818776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.818788 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.830249 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:38Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.830388 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.832776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.832827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.832848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.832871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.832887 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.858721 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:38 crc kubenswrapper[4832]: E0131 04:43:38.858867 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.935584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.935629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.935648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.935667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:38 crc kubenswrapper[4832]: I0131 04:43:38.935677 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:38Z","lastTransitionTime":"2026-01-31T04:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.038624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.038669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.038679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.038696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.038711 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.064522 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" containerID="c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca" exitCode=0 Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.064596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.069115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nspv9" event={"ID":"830c4bc3-45df-4e7b-a494-dec77c4318ac","Type":"ContainerStarted","Data":"331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.086757 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.102030 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.115319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.129895 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.142545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.147064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.147130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.147146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc 
kubenswrapper[4832]: I0131 04:43:39.147168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.147185 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.164693 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04
:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.179599 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.192942 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.209761 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.230129 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.243495 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.251532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.251613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.251632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 
04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.251659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.251676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.259456 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.273544 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.309156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.352620 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.354131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.354192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.354212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.354238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.354253 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.386181 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.424083 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.457078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.457378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.457397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.457418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.457431 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.463782 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.505636 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.544943 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.560888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.561195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.561283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.561383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.561503 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.587934 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.630850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.664833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc 
kubenswrapper[4832]: I0131 04:43:39.664887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.664900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.664918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.664933 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.667376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.704637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.749893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.767725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.767767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.767777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.767793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.767806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.784241 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.797973 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:15:39.614058616 +0000 UTC Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.824076 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.858387 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.858398 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:39 crc kubenswrapper[4832]: E0131 04:43:39.858528 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:39 crc kubenswrapper[4832]: E0131 04:43:39.858590 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.868036 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.869597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.869622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.869630 
4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.869642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.869651 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.911798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b60
50596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.953066 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:39Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.972131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.972165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.972175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.972191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.972201 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:39Z","lastTransitionTime":"2026-01-31T04:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:39 crc kubenswrapper[4832]: I0131 04:43:39.999708 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:39.999896 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:43:47.999872413 +0000 UTC m=+36.948694148 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.000193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.000290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.000420 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.000474 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:48.000462391 +0000 UTC m=+36.949284076 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.000766 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.000814 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:48.000805011 +0000 UTC m=+36.949626696 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.075984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.076039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.076058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.076086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.076105 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.077853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerStarted","Data":"fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.083486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.101164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.101218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101298 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101319 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101327 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101333 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101346 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101361 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101386 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:48.101369835 +0000 UTC m=+37.050191520 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.101406 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:48.101393616 +0000 UTC m=+37.050215311 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.105138 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.116835 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.138827 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.153854 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.169011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.179313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.179356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.179367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.179386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.179401 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.189308 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6
fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.229900 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.269010 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.283383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.283449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.283468 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.283496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.283519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.309916 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.352541 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.387179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.387237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.387248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.387270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.387282 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.391159 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.431931 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.474619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.489861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.489931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.489955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.489992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.490015 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.511328 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.553309 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.593292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.593355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.593374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.593398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.593415 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.696446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.696476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.696484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.696513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.696523 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.798102 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:36:40.652717964 +0000 UTC Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.799864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.799918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.799936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.799963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.799980 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.858825 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:40 crc kubenswrapper[4832]: E0131 04:43:40.859065 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.903411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.903477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.903494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.903524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:40 crc kubenswrapper[4832]: I0131 04:43:40.903542 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:40Z","lastTransitionTime":"2026-01-31T04:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.007503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.007602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.007622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.007653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.007676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.093027 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" containerID="fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1" exitCode=0 Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.093157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.113676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.113742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.113762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.113796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.113817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.118246 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.143011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.158798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.179132 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.191831 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.220732 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.224582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.224775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.225196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.225405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.225552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.244670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.263914 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.290252 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.309485 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.326733 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.328781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.328815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.328825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.328844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.328869 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.341844 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.361364 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.375532 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.391272 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.431547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.431601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.431614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.431632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.431643 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.535023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.535080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.535095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.535115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.535128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.608031 4832 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.637833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.637911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.637929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.637948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.637960 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.739855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.739893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.739903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.739917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.739928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.798722 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:56:45.721663159 +0000 UTC Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.841957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.841999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.842010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.842027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.842040 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.859624 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:41 crc kubenswrapper[4832]: E0131 04:43:41.859948 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.860151 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:41 crc kubenswrapper[4832]: E0131 04:43:41.860335 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.880856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.895069 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.912694 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.939436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.944332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.944411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.944429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.944453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.944469 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:41Z","lastTransitionTime":"2026-01-31T04:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.963812 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:41 crc kubenswrapper[4832]: I0131 04:43:41.986932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.008148 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.023111 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.044665 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.047578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.047601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.047612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc 
kubenswrapper[4832]: I0131 04:43:42.047628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.047642 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.073228 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04
:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.100026 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.107005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.107738 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.107808 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.115029 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ea025bd-5921-4529-887b-d627fa8e245e" containerID="7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8" exitCode=0 Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.115098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerDied","Data":"7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.115627 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.139012 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.161798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.179677 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.193147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.193198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.193216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.193240 4832 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.193256 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.195357 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.195970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.196215 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.212477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.235712 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:
26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.253148 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.274375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.289397 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd
15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.296240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.296279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.296294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.296311 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.296325 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.313486 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.331714 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.350545 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.370083 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.391061 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.399611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.399668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.399681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.399700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.399712 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.409040 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.427894 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.440967 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\
":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.459714 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.480923 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.494707 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.503619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.503675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.503688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc 
kubenswrapper[4832]: I0131 04:43:42.503710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.503728 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.513389 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.532732 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.555232 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.588323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.607124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.607209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.607230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.607266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.607289 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.630243 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z 
is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.668965 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.704606 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.709131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.709187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.709204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.709224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.709236 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.748486 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.781681 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.799342 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:59:18.755455142 +0000 UTC Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.810960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.810999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.811012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.811034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.811048 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.828923 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.859031 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:42 crc kubenswrapper[4832]: E0131 04:43:42.859182 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.864735 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.906490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.913411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.913448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.913458 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.913473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.913487 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:42Z","lastTransitionTime":"2026-01-31T04:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:42 crc kubenswrapper[4832]: I0131 04:43:42.948960 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b60
50596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.016768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.016824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.016840 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.016862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.016878 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.125707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.125762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.125777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.125802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.125818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.133313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" event={"ID":"9ea025bd-5921-4529-887b-d627fa8e245e","Type":"ContainerStarted","Data":"a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.133435 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.159662 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.175731 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.190502 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.204816 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.219829 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.229320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.229356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.229365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.229380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.229391 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.242018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.258984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-3
1T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.270657 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.306170 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.332468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.332501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.332509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.332524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.332534 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.346583 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.384258 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.425792 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.434827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc 
kubenswrapper[4832]: I0131 04:43:43.434880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.434897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.434919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.434935 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.469029 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.507455 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.538118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.538178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.538195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.538218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.538235 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.547072 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:43Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.641861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.641942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.641967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.642002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.642026 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.745798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.745846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.745858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.745877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.745890 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.799503 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:49:04.095752574 +0000 UTC Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.849219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.849268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.849281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.849304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.849317 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.858845 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.858868 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:43 crc kubenswrapper[4832]: E0131 04:43:43.859043 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:43 crc kubenswrapper[4832]: E0131 04:43:43.859192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.952403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.952434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.952441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.952457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:43 crc kubenswrapper[4832]: I0131 04:43:43.952469 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:43Z","lastTransitionTime":"2026-01-31T04:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.055037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.055090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.055103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.055125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.055140 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.137053 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.158618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.158696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.158723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.158756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.158780 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.262039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.262079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.262090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.262109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.262122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.364747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.364844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.364863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.364890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.364910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.468030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.468076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.468089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.468106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.468118 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.570992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.571041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.571054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.571073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.571085 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.674239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.674306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.674322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.674348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.674367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.776588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.776633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.776645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.776669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.776684 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.799891 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:36:39.095009931 +0000 UTC Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.858525 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:44 crc kubenswrapper[4832]: E0131 04:43:44.858715 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.879682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.879742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.879756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.879776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.879796 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.982652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.982694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.982706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.982724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:44 crc kubenswrapper[4832]: I0131 04:43:44.982741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:44Z","lastTransitionTime":"2026-01-31T04:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.085245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.085297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.085309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.085329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.085342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.189034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.189163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.189225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.189254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.189274 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.292170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.292222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.292237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.292261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.292272 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.395376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.395472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.395493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.395521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.395541 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.498344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.498395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.498407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.498427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.498439 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.601634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.601674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.601682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.601698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.601708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.704789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.704844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.704861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.704885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.704903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.800388 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:32:22.022002178 +0000 UTC Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.807824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.807872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.807884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.807902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.807916 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.859248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:45 crc kubenswrapper[4832]: E0131 04:43:45.859490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.859965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:45 crc kubenswrapper[4832]: E0131 04:43:45.860133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.910416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.910460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.910469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.910485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:45 crc kubenswrapper[4832]: I0131 04:43:45.910496 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:45Z","lastTransitionTime":"2026-01-31T04:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.013523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.013859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.013975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.014086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.014177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.119932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.119984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.119997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.120016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.120036 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.222610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.222686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.222701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.222721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.222732 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.325492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.325538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.325551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.325590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.325603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.428403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.428463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.428472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.428489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.428499 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.532631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.532709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.532725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.532748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.532762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.635628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.635694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.635708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.635733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.635749 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.686732 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6"] Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.687466 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.692282 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.692809 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.713045 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.728009 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.739481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.739531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.739541 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.739579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.739590 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.751155 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe8
9981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.780047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.780504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.780797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.780993 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqnn\" (UniqueName: \"kubernetes.io/projected/70b23715-2e5a-45f7-9e0a-093c15037d3a-kube-api-access-8hqnn\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.781973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.799172 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-3
1T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.801322 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:52:00.729134802 +0000 UTC Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.816521 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.838822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.842050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.842101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.842115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.842137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.842151 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.859154 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:46 crc kubenswrapper[4832]: E0131 04:43:46.859347 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.865670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.879968 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.882395 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqnn\" (UniqueName: \"kubernetes.io/projected/70b23715-2e5a-45f7-9e0a-093c15037d3a-kube-api-access-8hqnn\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.882705 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.882889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.883121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.883899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.884329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.892773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70b23715-2e5a-45f7-9e0a-093c15037d3a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.899279 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.899723 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqnn\" (UniqueName: \"kubernetes.io/projected/70b23715-2e5a-45f7-9e0a-093c15037d3a-kube-api-access-8hqnn\") pod \"ovnkube-control-plane-749d76644c-rxzd6\" (UID: \"70b23715-2e5a-45f7-9e0a-093c15037d3a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.921008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing 
delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.934188 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.946098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.946156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.946167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.946191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.946203 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:46Z","lastTransitionTime":"2026-01-31T04:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.948001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.962012 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6
095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.984739 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e
94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:46 crc kubenswrapper[4832]: I0131 04:43:46.996611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:46Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.011821 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" Jan 31 04:43:47 crc kubenswrapper[4832]: W0131 04:43:47.028870 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70b23715_2e5a_45f7_9e0a_093c15037d3a.slice/crio-99fb84252ece8d308f04fb71e65b0b24f4d29977422841e6b240f43d45baa96d WatchSource:0}: Error finding container 99fb84252ece8d308f04fb71e65b0b24f4d29977422841e6b240f43d45baa96d: Status 404 returned error can't find the container with id 99fb84252ece8d308f04fb71e65b0b24f4d29977422841e6b240f43d45baa96d Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.050046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.050103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.050112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.050134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc 
kubenswrapper[4832]: I0131 04:43:47.050144 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.150535 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/0.log" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.152869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.152911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.152924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.152946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.152960 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.165065 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de" exitCode=1 Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.165178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.166271 4832 scope.go:117] "RemoveContainer" containerID="fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.172762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" event={"ID":"70b23715-2e5a-45f7-9e0a-093c15037d3a","Type":"ContainerStarted","Data":"99fb84252ece8d308f04fb71e65b0b24f4d29977422841e6b240f43d45baa96d"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.183237 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.198319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.213178 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.240043 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664823 6100 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664919 6100 
reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0131 04:43:45.665031 6100 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665098 6100 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665307 6100 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665432 6100 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665813 6100 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.260793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.260832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.260845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.260863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.260876 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.283074 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.365326 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.367468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.367504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.367517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.367537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.367550 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.385541 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.403101 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.418868 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.435611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.454165 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.470292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.470341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.470353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.470375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.470387 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.476461 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.488439 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.504861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.526338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.536658 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:47Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.572504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.572577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.572621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.572641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.572654 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.675360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.675405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.675418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.675438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.675450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.778784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.778819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.778828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.778845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.778866 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.802501 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:13:53.20668466 +0000 UTC Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.858730 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:47 crc kubenswrapper[4832]: E0131 04:43:47.858892 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.859276 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:47 crc kubenswrapper[4832]: E0131 04:43:47.859335 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.881826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.881872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.881886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.881901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.881916 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.985422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.985481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.985491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.985511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:47 crc kubenswrapper[4832]: I0131 04:43:47.985525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:47Z","lastTransitionTime":"2026-01-31T04:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.088147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.088193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.088203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.088221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.088234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.099707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.099886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.099909 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:44:04.099878729 +0000 UTC m=+53.048700414 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.099962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.099986 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.100058 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:04.100033964 +0000 UTC m=+53.048855869 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.100210 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.100335 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:04.100301472 +0000 UTC m=+53.049123157 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.184136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/0.log" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.189144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.189357 4832 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.190461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.190659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.190760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.190849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.190908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.191752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" event={"ID":"70b23715-2e5a-45f7-9e0a-093c15037d3a","Type":"ContainerStarted","Data":"16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.191826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" event={"ID":"70b23715-2e5a-45f7-9e0a-093c15037d3a","Type":"ContainerStarted","Data":"2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.201644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.201705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.201905 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.201941 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.201962 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.202043 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:04.20201003 +0000 UTC m=+53.150831725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.201910 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.202208 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.202278 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.202410 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:04.202385061 +0000 UTC m=+53.151206746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.211881 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.224092 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.240067 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.243383 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rbg9h"] Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.244024 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.244157 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.257358 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd0
06a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.273542 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.290186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.295116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.295162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.295173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.295197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.295209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.302396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.302435 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6dc\" (UniqueName: \"kubernetes.io/projected/88205cd8-6bbf-40af-a0d1-bfae431d97e7-kube-api-access-nm6dc\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.312971 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.331828 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.349018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.364343 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.394634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.397732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.397820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.397835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.397859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc 
kubenswrapper[4832]: I0131 04:43:48.397875 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.403382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.403429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6dc\" (UniqueName: \"kubernetes.io/projected/88205cd8-6bbf-40af-a0d1-bfae431d97e7-kube-api-access-nm6dc\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.403638 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.403766 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:48.903733366 +0000 UTC m=+37.852555261 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.409119 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.422622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6dc\" (UniqueName: \"kubernetes.io/projected/88205cd8-6bbf-40af-a0d1-bfae431d97e7-kube-api-access-nm6dc\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.426598 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.444035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.459513 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.484696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664823 6100 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664919 6100 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0131 04:43:45.665031 6100 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665098 6100 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665307 6100 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665432 6100 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665813 6100 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.501448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.501737 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.501779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.502054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.502141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.502221 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.517086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.531486 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.547160 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.559232 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.575141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.592711 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.604868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.604952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.604975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.605008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.605035 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.610736 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.624992 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.639918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.672769 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\
\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.687139 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.704850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc 
kubenswrapper[4832]: I0131 04:43:48.709993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.710077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.710139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.710173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.710204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.728324 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.749280 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.766674 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.787025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664823 6100 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664919 6100 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0131 04:43:45.665031 6100 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665098 6100 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665307 6100 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665432 6100 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665813 6100 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:48Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.803819 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:50:36.7620326 +0000 UTC Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.813131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.813174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.813186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.813209 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.813226 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.858929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.859221 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.908655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.908892 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: E0131 04:43:48.909035 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:49.909004097 +0000 UTC m=+38.857825992 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.916923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.917165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.917345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.917494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:48 crc kubenswrapper[4832]: I0131 04:43:48.917650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:48Z","lastTransitionTime":"2026-01-31T04:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.020882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.020940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.020956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.020980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.020994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.057249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.057540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.057659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.057914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.058016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.080500 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.086382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.087009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.087155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.087296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.087421 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.108718 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.113958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.114157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.114288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.114448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.114648 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.132857 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.139418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.139635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.139785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.139949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.140086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.161487 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.166664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.166718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.166729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.166749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.166763 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.185005 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.185172 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.187371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.187407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.187415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.187429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.187439 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.198803 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/1.log" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.199782 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/0.log" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.203795 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9" exitCode=1 Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.203859 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.203996 4832 scope.go:117] "RemoveContainer" containerID="fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.205546 4832 scope.go:117] "RemoveContainer" containerID="0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.205906 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.231702 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.253088 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.270004 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.288593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.290059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.290150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.290170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.290204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.290223 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.304039 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.319262 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.336496 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a
0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.355824 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.369540 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.390162 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.393883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.393925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.393940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.393965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.393980 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.415780 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.433753 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.449618 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc 
kubenswrapper[4832]: I0131 04:43:49.467684 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.486517 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.496862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.496919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.496933 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.496952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.496967 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.509596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe8
9981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.539759 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664823 6100 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664919 6100 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0131 04:43:45.665031 6100 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665098 6100 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665307 6100 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665432 6100 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665813 6100 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:49Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.600683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.600786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.600815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.600852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.600879 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.705020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.705075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.705092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.705118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.705140 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.805080 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:54:54.157940737 +0000 UTC Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.808742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.808788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.808799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.808817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.808830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.858400 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.858488 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.858646 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.858400 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.858820 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.858933 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.912683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.912722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.912734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.912753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.912767 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:49Z","lastTransitionTime":"2026-01-31T04:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:49 crc kubenswrapper[4832]: I0131 04:43:49.923495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.923845 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:49 crc kubenswrapper[4832]: E0131 04:43:49.924097 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:51.924055407 +0000 UTC m=+40.872877302 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.015731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.015797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.015812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.015835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.015851 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.119467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.119515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.119528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.119548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.119587 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.213052 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/1.log" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.222427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.222490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.222514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.222538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.222582 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.325702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.325754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.325767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.325816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.325830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.429854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.430029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.430058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.430131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.430160 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.533240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.533286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.533302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.533324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.533340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.636205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.636536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.636639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.636710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.636770 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.740281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.740343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.740360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.740384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.740402 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.806076 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:55:39.454030285 +0000 UTC Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.843134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.843278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.843299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.843325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.843352 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.858715 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:50 crc kubenswrapper[4832]: E0131 04:43:50.858939 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.946860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.946941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.946960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.946986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:50 crc kubenswrapper[4832]: I0131 04:43:50.947004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:50Z","lastTransitionTime":"2026-01-31T04:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.050756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.050804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.050820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.050841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.050853 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.152727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.152781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.152809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.152830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.152875 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.255183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.255229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.255239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.255256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.255267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.359711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.359769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.359788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.359890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.359911 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.462626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.462685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.462698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.462716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.463026 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.565806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.565849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.565861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.565884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.565900 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.668719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.669313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.669394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.669488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.669659 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.772772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.773170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.773304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.773398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.773465 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.806591 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:50:29.918246172 +0000 UTC Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.859347 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:51 crc kubenswrapper[4832]: E0131 04:43:51.859783 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.859425 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:51 crc kubenswrapper[4832]: E0131 04:43:51.860050 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.859393 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:51 crc kubenswrapper[4832]: E0131 04:43:51.860307 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.877755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.877792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.877818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.877835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.877845 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.881582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.893288 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.906280 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.929041 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1e4560305c2a311b054a4ea0347eaaf42a14868de5f41a23246c535c31b2de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"message\\\":\\\"ice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664823 6100 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 04:43:45.664919 6100 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0131 04:43:45.665031 6100 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665098 6100 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665307 6100 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665432 6100 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:45.665813 6100 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.941800 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.944087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:51 crc kubenswrapper[4832]: E0131 04:43:51.944287 
4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:51 crc kubenswrapper[4832]: E0131 04:43:51.944459 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:43:55.944444035 +0000 UTC m=+44.893265720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.954213 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.972977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.980338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.980380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.980392 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.980409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.980419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:51Z","lastTransitionTime":"2026-01-31T04:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:51 crc kubenswrapper[4832]: I0131 04:43:51.984920 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.000606 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.014083 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.024809 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.039972 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.056735 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.068770 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.082700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.082874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.082943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.083022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.083088 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.094879 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.111122 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.124502 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:52 crc 
kubenswrapper[4832]: I0131 04:43:52.186266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.186316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.186326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.186344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.186354 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.289249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.289309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.289318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.289335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.289347 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.392382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.392424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.392436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.392451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.392461 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.495162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.495198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.495209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.495223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.495234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.598472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.598935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.598962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.598993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.599016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.701541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.701676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.701696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.701721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.701738 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.805764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.805863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.805890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.805925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.805944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.806769 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:14:35.980487365 +0000 UTC Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.858776 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:52 crc kubenswrapper[4832]: E0131 04:43:52.858985 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.909726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.909845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.909871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.909907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:52 crc kubenswrapper[4832]: I0131 04:43:52.909930 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:52Z","lastTransitionTime":"2026-01-31T04:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.013037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.013102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.013117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.013137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.013152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.115806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.115886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.115912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.115945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.115972 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.218854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.218914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.218931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.218954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.218968 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.322018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.322049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.322057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.322072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.322081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.425254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.425324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.425348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.425382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.425408 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.528408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.528464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.528478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.528497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.528510 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.631276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.631339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.631357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.631386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.631405 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.734238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.734300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.734316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.734342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.734362 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.807526 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:59.43239833 +0000 UTC Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.837641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.837721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.837746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.837778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.837802 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.859404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.859401 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:53 crc kubenswrapper[4832]: E0131 04:43:53.859701 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:53 crc kubenswrapper[4832]: E0131 04:43:53.859794 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.859719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:53 crc kubenswrapper[4832]: E0131 04:43:53.859970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.941176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.941217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.941229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.941247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:53 crc kubenswrapper[4832]: I0131 04:43:53.941259 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:53Z","lastTransitionTime":"2026-01-31T04:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.045068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.045142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.045165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.045197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.045220 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.148416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.148478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.148495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.148520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.148538 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.251827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.251899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.251925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.251956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.251981 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.355075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.355117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.355127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.355146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.355158 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.457864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.457920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.457933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.457953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.457969 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.560861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.560923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.560936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.560956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.560970 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.664212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.664274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.664292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.664322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.664342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.767791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.767859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.767877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.767906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.767926 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.808645 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:14:35.316807756 +0000 UTC Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.859371 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:54 crc kubenswrapper[4832]: E0131 04:43:54.859668 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.870932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.870985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.871003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.871028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.871044 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.973616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.973686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.973707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.973739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:54 crc kubenswrapper[4832]: I0131 04:43:54.973761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:54Z","lastTransitionTime":"2026-01-31T04:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.077089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.077155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.077180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.077213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.077234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.180930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.180990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.181009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.181032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.181049 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.283639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.283703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.283721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.283747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.283764 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.387079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.387146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.387170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.387200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.387221 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.490718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.490769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.490781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.490801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.490817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.593297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.593345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.593364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.593431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.593440 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.696506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.696580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.696597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.696622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.696641 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.799314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.799356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.799384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.799402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.799412 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.808801 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:25:08.173312065 +0000 UTC Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.859337 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:55 crc kubenswrapper[4832]: E0131 04:43:55.859496 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.859350 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.859648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:55 crc kubenswrapper[4832]: E0131 04:43:55.859908 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:55 crc kubenswrapper[4832]: E0131 04:43:55.860110 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.901654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.901728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.901745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.901765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.901778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:55Z","lastTransitionTime":"2026-01-31T04:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:55 crc kubenswrapper[4832]: I0131 04:43:55.988999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:55 crc kubenswrapper[4832]: E0131 04:43:55.989220 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:55 crc kubenswrapper[4832]: E0131 04:43:55.989344 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:03.989306063 +0000 UTC m=+52.938127788 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.004265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.004309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.004325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.004346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.004360 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.107597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.107636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.107663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.107680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.107690 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.210742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.210818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.210837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.210863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.210882 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.314867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.314956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.314982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.315015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.315042 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.418418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.418480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.418500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.418531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.418554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.521485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.521589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.521612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.521637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.521654 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.625073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.625163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.625196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.625228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.625246 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.728187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.728259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.728282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.728311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.728333 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.809656 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:36:30.456299752 +0000 UTC Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.831781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.831861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.831889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.831922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.831946 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.859396 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:56 crc kubenswrapper[4832]: E0131 04:43:56.859635 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.935989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.936038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.936055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.936073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:56 crc kubenswrapper[4832]: I0131 04:43:56.936084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:56Z","lastTransitionTime":"2026-01-31T04:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.039248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.039291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.039303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.039321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.039335 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.141696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.141730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.141738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.141752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.141761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.246363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.246433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.246459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.246493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.246513 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.350015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.350066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.350079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.350095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.350108 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.453229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.453269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.453278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.453294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.453306 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.556357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.556393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.556429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.556447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.556457 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.659614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.659664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.659680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.659706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.659724 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.762714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.762748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.762759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.762773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.762783 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.810529 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:13:03.536566129 +0000 UTC Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.858995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:57 crc kubenswrapper[4832]: E0131 04:43:57.859218 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.860109 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:57 crc kubenswrapper[4832]: E0131 04:43:57.860224 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.860304 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:57 crc kubenswrapper[4832]: E0131 04:43:57.860391 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.865245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.865293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.865350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.865376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.865393 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.896439 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.897240 4832 scope.go:117] "RemoveContainer" containerID="0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9" Jan 31 04:43:57 crc kubenswrapper[4832]: E0131 04:43:57.897422 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.929902 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.952611 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.968178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.968229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.968246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 
04:43:57.968271 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.968291 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:57Z","lastTransitionTime":"2026-01-31T04:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.970622 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:57 crc kubenswrapper[4832]: I0131 04:43:57.993617 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:57Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.012626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.028406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.044204 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.059983 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.071668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.071731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.071744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.071765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.071780 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.077404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.094096 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.108861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.127092 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.141738 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.160650 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.176952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.177007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.177023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.177047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.177067 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.190817 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.203493 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.217150 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:58Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:58 crc 
kubenswrapper[4832]: I0131 04:43:58.280298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.280333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.280345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.280364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.280375 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.384681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.384737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.384751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.384770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.384783 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.487990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.488073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.488100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.488131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.488153 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.591615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.591670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.591689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.591712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.591726 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.695256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.695297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.695313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.695339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.695357 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.798517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.798552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.798587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.798606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.798619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.811478 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:47:19.797184719 +0000 UTC Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.858760 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:43:58 crc kubenswrapper[4832]: E0131 04:43:58.858944 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.901410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.901457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.901477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.901501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:58 crc kubenswrapper[4832]: I0131 04:43:58.901519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:58Z","lastTransitionTime":"2026-01-31T04:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.005301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.005359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.005377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.005403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.005420 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.109157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.109210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.109224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.109245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.109256 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.212492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.212535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.212545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.212591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.212614 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.274750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.274855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.274877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.274901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.274961 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.294686 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:59Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.300052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.300089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.300098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.300113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.300122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.318492 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:59Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.323097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.323143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.323157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.323179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.323189 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.343397 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:59Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.348275 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.348317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.348329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.348373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.348404 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.365442 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:59Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.370835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.370918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.370944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.370979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.371002 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.387612 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:43:59Z is after 2025-08-24T17:21:41Z" Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.387744 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.389505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.389530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.389538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.389553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.389579 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.492293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.492334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.492346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.492362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.492372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.594981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.595041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.595062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.595091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.595114 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.697892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.697950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.697962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.697982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.697993 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.800650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.800757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.800774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.800795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.800809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.811801 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:52:13.399710973 +0000 UTC Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.858751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.858878 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.858979 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.858997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.859132 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:43:59 crc kubenswrapper[4832]: E0131 04:43:59.859364 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.904092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.904179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.904194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.904214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:43:59 crc kubenswrapper[4832]: I0131 04:43:59.904228 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:43:59Z","lastTransitionTime":"2026-01-31T04:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.006969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.007007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.007016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.007032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.007042 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.110212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.110258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.110267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.110284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.110296 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.212965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.213035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.213047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.213066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.213080 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.315352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.315418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.315431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.315471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.315485 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.418099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.418214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.418239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.418270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.418295 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.522185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.522235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.522243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.522259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.522293 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.628944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.629018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.629035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.629059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.629076 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.732339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.732400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.732423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.732455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.732481 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.812174 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:23:00.532798508 +0000 UTC Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.835959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.836036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.836049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.836074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.836087 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.858722 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:00 crc kubenswrapper[4832]: E0131 04:44:00.858896 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.938886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.938959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.938982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.939013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:00 crc kubenswrapper[4832]: I0131 04:44:00.939038 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:00Z","lastTransitionTime":"2026-01-31T04:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.043057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.043117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.043136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.043162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.043177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.147306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.147361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.147373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.147394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.147407 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.250368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.250440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.250457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.250476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.250489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.353688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.353739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.353756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.353775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.353786 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.456886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.456939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.456955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.456976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.456991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.560083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.560151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.560163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.560183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.560196 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.663336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.663379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.663393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.663415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.663429 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.765976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.766045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.766056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.766071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.766081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.812841 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:02:15.430828923 +0000 UTC Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.858639 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.858790 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.858660 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:01 crc kubenswrapper[4832]: E0131 04:44:01.858900 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:01 crc kubenswrapper[4832]: E0131 04:44:01.859076 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:01 crc kubenswrapper[4832]: E0131 04:44:01.859243 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.868752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.869626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.869780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.869824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.869840 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.887498 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.904499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.920317 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:01 crc 
kubenswrapper[4832]: I0131 04:44:01.938004 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.979960 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.982487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.982534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.982546 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.982582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:01 crc kubenswrapper[4832]: I0131 04:44:01.982595 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:01Z","lastTransitionTime":"2026-01-31T04:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.007980 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe8
9981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.028806 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.042102 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.054257 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.067484 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.079745 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.085120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.085245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.085307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.085378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.085449 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.090828 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.100914 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.119444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.133699 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:
26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.151371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.166358 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:02Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.187584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.187644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.187660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.187689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.187708 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.290795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.290844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.290861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.290886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.290903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.394091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.394172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.394218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.394246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.394268 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.496872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.496963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.496985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.497008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.497062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.599699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.599747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.599765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.599787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.599804 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.702370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.702413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.702422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.702438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.702450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.805689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.805752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.805770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.805796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.805814 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.812987 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:59:56.509578964 +0000 UTC Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.858776 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:02 crc kubenswrapper[4832]: E0131 04:44:02.858939 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.908728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.908779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.908791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.908817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:02 crc kubenswrapper[4832]: I0131 04:44:02.908833 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:02Z","lastTransitionTime":"2026-01-31T04:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.011235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.011288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.011303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.011323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.011336 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.114603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.114667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.114682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.114701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.114722 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.217206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.217256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.217266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.217283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.217294 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.320197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.320261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.320276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.320298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.320316 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.423279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.423319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.423329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.423345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.423355 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.526707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.526776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.526794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.526820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.526843 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.630225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.630300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.630318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.630345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.630364 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.734063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.734112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.734131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.734159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.734177 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.813855 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:32:21.805904418 +0000 UTC Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.837313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.837365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.837381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.837402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.837416 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.859174 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.859257 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:03 crc kubenswrapper[4832]: E0131 04:44:03.859368 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:03 crc kubenswrapper[4832]: E0131 04:44:03.859491 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.859666 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:03 crc kubenswrapper[4832]: E0131 04:44:03.859839 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.940478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.940551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.940610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.940646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:03 crc kubenswrapper[4832]: I0131 04:44:03.940672 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:03Z","lastTransitionTime":"2026-01-31T04:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.002893 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.003105 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.003200 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:20.003172159 +0000 UTC m=+68.951993884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.044260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.044316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.044332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.044356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.044372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.103805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.104027 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:44:36.10398484 +0000 UTC m=+85.052806565 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.104119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.104345 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:44:04 
crc kubenswrapper[4832]: I0131 04:44:04.104352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.104450 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:36.104424303 +0000 UTC m=+85.053246008 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.104507 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.104783 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:36.104737003 +0000 UTC m=+85.053558688 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.148251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.148333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.148360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.148395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.148419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.205455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.205521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.205729 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.205754 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.205768 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.205843 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:36.205824822 +0000 UTC m=+85.154646507 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.205982 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.206036 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.206063 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.206164 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:36.206131812 +0000 UTC m=+85.154953537 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.251702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.251795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.251813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.251848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.251870 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.355538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.355688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.355710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.355745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.355766 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.459584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.459630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.459639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.459657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.459669 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.562724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.562824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.562846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.562878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.562903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.666377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.666433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.666445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.666464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.666476 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.769105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.769147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.769156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.769196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.769209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.814783 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:10:31.223801876 +0000 UTC Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.858723 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:04 crc kubenswrapper[4832]: E0131 04:44:04.858906 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.872070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.872147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.872175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.872203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.872221 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.918713 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.929028 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.935337 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:04Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.951990 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:04Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.965774 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:04Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.974039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.974079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.974090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.974108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.974121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:04Z","lastTransitionTime":"2026-01-31T04:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.978938 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:04Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:04 crc kubenswrapper[4832]: I0131 04:44:04.999175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:04Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.013634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.030926 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.047298 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:
26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.062467 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.073783 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.076978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.077039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.077053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.077076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.077090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.097746 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.110100 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.123935 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc 
kubenswrapper[4832]: I0131 04:44:05.140000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.156676 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.173143 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.179226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.179268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.179280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.179299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.179310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.192479 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:05Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.281002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.281036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.281047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.281064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.281074 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.383933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.383967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.383977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.383994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.384006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.487382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.487428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.487445 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.487464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.487476 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.590123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.590195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.590214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.590242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.590264 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.693441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.693487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.693497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.693513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.693526 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.796463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.796524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.796537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.796584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.796600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.815074 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:57:31.522583462 +0000 UTC Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.858844 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.858887 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.858875 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:05 crc kubenswrapper[4832]: E0131 04:44:05.859018 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:05 crc kubenswrapper[4832]: E0131 04:44:05.859081 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:05 crc kubenswrapper[4832]: E0131 04:44:05.859150 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.900031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.900086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.900100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.900119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:05 crc kubenswrapper[4832]: I0131 04:44:05.900132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:05Z","lastTransitionTime":"2026-01-31T04:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.003491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.003537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.003546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.003578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.003591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.106124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.106203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.106218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.106236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.106251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.208926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.208987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.209005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.209025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.209038 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.311323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.311373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.311382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.311400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.311411 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.413993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.414036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.414045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.414064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.414075 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.516098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.516136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.516146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.516163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.516176 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.623338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.623399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.623412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.624707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.624750 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.728430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.728504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.728530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.728607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.728637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.816065 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:45:55.200682715 +0000 UTC Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.831407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.831435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.831444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.831458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.831466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.858520 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:06 crc kubenswrapper[4832]: E0131 04:44:06.858826 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.933726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.934004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.934153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.934279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:06 crc kubenswrapper[4832]: I0131 04:44:06.934377 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:06Z","lastTransitionTime":"2026-01-31T04:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.037123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.037369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.037427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.037531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.037652 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.140503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.140551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.140581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.140601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.140613 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.242856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.242924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.242938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.242961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.242972 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.345973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.346039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.346055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.346081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.346099 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.449418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.449493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.449512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.449546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.449595 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.552936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.552990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.553008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.553032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.553049 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.656646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.656993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.657074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.657145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.657209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.761241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.761303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.761321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.761350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.761369 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.817140 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:52:22.552913015 +0000 UTC Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.858989 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.859127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:07 crc kubenswrapper[4832]: E0131 04:44:07.859190 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.859126 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:07 crc kubenswrapper[4832]: E0131 04:44:07.859350 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:07 crc kubenswrapper[4832]: E0131 04:44:07.859422 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.863971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.864018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.864037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.864063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.864082 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.966622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.966681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.966693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.966708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:07 crc kubenswrapper[4832]: I0131 04:44:07.966717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:07Z","lastTransitionTime":"2026-01-31T04:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.069835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.069897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.069911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.069932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.069947 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.173687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.173755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.173764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.173784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.173796 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.277294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.277348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.277365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.277385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.277397 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.381173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.381242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.381259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.381285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.381304 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.484500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.484591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.484607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.484630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.484644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.588378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.588472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.588487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.588534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.588593 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.692532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.692614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.692625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.692644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.692656 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.795856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.795937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.795957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.795986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.796006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.817808 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:09:52.300574734 +0000 UTC Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.859343 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:08 crc kubenswrapper[4832]: E0131 04:44:08.859534 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.899276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.899350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.899375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.899413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:08 crc kubenswrapper[4832]: I0131 04:44:08.899436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:08Z","lastTransitionTime":"2026-01-31T04:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.002011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.002067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.002087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.002108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.002122 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.108276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.108359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.108410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.108432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.108476 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.211954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.212018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.212036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.212060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.212079 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.316082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.316197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.316219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.316257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.316279 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.398977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.399045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.399064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.399090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.399108 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.418318 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:09Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.424903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.425058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.425109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.425129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.425144 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.441810 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:09Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.448209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.448272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.448284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.448302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.448312 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.468087 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:09Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.473503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.473550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.473590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.473619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.473636 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.493661 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:09Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.499222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.499318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.499416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.499453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.499519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.519984 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:09Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.520352 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.522661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.522739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.522767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.522798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.522822 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.626127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.626187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.626204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.626226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.626244 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.729700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.729779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.729799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.729831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.729850 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.818926 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:25:51.503279911 +0000 UTC Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.833695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.833766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.833785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.833814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.833835 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.859053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.859132 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.859125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.859265 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.859355 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:09 crc kubenswrapper[4832]: E0131 04:44:09.859470 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.938165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.938258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.938284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.938317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:09 crc kubenswrapper[4832]: I0131 04:44:09.938337 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:09Z","lastTransitionTime":"2026-01-31T04:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.041829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.041879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.041893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.041918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.041934 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.145431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.145518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.145545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.145614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.145640 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.249417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.249523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.249540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.249600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.249621 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.352499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.352536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.352546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.352578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.352591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.455998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.456066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.456083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.456111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.456129 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.558855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.558909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.558922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.558942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.558961 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.662079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.662119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.662133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.662151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.662164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.765417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.765482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.765505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.765536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.765591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.819790 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:10:46.135202161 +0000 UTC Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.859381 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:10 crc kubenswrapper[4832]: E0131 04:44:10.859799 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.860905 4832 scope.go:117] "RemoveContainer" containerID="0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.868753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.868805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.868820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.868840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.868854 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.971538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.971591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.971605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.971623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:10 crc kubenswrapper[4832]: I0131 04:44:10.971635 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:10Z","lastTransitionTime":"2026-01-31T04:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.075769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.075824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.075843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.075871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.075892 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.178671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.178757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.178775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.178803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.178820 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.282461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.282779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.282792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.282809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.282820 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.303946 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/1.log" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.307131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.307709 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.332392 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.353705 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.368086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.385829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.385861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.385872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.385887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.385896 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.387724 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.405083 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.418531 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.431500 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.452271 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.471493 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.489238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.489269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.489278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.489294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.489304 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.492738 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.511052 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.527644 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.545662 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.564091 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:
26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.589230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.591167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.591227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.591241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.591263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.591276 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.618745 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.631097 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.643340 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc 
kubenswrapper[4832]: I0131 04:44:11.693979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.694015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.694025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.694044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.694056 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.796117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.796169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.796187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.796213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.796230 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.820470 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:06:38.916463354 +0000 UTC Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.858395 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.858522 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:11 crc kubenswrapper[4832]: E0131 04:44:11.858666 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:11 crc kubenswrapper[4832]: E0131 04:44:11.858816 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.859264 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:11 crc kubenswrapper[4832]: E0131 04:44:11.859359 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.889011 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.898764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.898815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.898828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.898845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.898858 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:11Z","lastTransitionTime":"2026-01-31T04:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.903625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.918002 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.933876 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.947375 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.971440 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:11 crc kubenswrapper[4832]: I0131 04:44:11.997319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 
04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:11Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.000802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.000853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.000863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.000879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.000889 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.007329 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.019912 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.036777 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.049862 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.062973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.076133 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.086134 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.096702 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.110537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.111164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.111192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.111203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.111221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.111232 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.121578 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.131265 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.212880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.213336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.213412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.213476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.213534 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.313292 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/2.log" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.314673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/1.log" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.315442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.315549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.315633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.315912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.316032 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.317813 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" exitCode=1 Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.317855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.317896 4832 scope.go:117] "RemoveContainer" containerID="0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.318990 4832 scope.go:117] "RemoveContainer" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" Jan 31 04:44:12 crc kubenswrapper[4832]: E0131 04:44:12.319719 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.335982 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.370906 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0839edaa49753865d416d37eb7dc166988feaa29a33733c69c4505b7cbd112f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 04:43:48.384287 6279 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 04:43:48.384377 6279 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 04:43:48.384419 6279 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0131 04:43:48.384425 6279 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 04:43:48.384467 6279 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 04:43:48.384473 6279 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 04:43:48.384516 6279 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 04:43:48.384574 6279 factory.go:656] Stopping watch factory\\\\nI0131 04:43:48.384588 6279 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 04:43:48.384599 6279 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 04:43:48.384611 6279 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 04:43:48.384615 6279 ovnkube.go:599] Stopped ovnkube\\\\nI0131 04:43:48.384620 6279 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 04:43:48.384623 6279 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 04:43:48.384644 6279 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 04:43:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod 
openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.385961 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.400410 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.414978 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.420204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.420248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.420260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.420282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.420294 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.432475 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.443088 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.455693 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.468157 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.480149 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.493055 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.507182 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.521885 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.522615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.522672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.522684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc 
kubenswrapper[4832]: I0131 04:44:12.522704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.522720 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.535656 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.553317 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.564881 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.583685 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9
408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.594005 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:12Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.627179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.627221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.627231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.627252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.627264 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.730501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.731149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.731380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.731600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.731822 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.821588 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:58:16.406190009 +0000 UTC Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.834967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.835033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.835052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.835079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.835098 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.859396 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:12 crc kubenswrapper[4832]: E0131 04:44:12.859581 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.938706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.938754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.938766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.938785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:12 crc kubenswrapper[4832]: I0131 04:44:12.938798 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:12Z","lastTransitionTime":"2026-01-31T04:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.042162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.042238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.042258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.042287 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.042306 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.145838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.146206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.146338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.146466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.146682 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.250017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.250083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.250101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.250124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.250145 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.324200 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/2.log" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.329794 4832 scope.go:117] "RemoveContainer" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" Jan 31 04:44:13 crc kubenswrapper[4832]: E0131 04:44:13.329978 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.351009 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.353267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.353309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.353325 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.353345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.353357 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.375789 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.395372 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.418172 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.439716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.456023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.456126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.456144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.456450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.456481 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.463536 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.477977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.499116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.514684 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.534643 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.551639 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.559482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.559511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.559519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.559537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.559546 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.582625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.599534 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.617437 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc 
kubenswrapper[4832]: I0131 04:44:13.636028 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.649013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.662443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.662483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.662493 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.662511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.662522 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.670587 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe8
9981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.703036 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:13Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.765700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.765791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.765818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.765972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.766033 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.822415 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:59:01.969328487 +0000 UTC Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.859289 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:13 crc kubenswrapper[4832]: E0131 04:44:13.859718 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.859505 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:13 crc kubenswrapper[4832]: E0131 04:44:13.859946 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.859312 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:13 crc kubenswrapper[4832]: E0131 04:44:13.860745 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.869328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.869367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.869380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.869399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.869411 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.974089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.974344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.974406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.974528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:13 crc kubenswrapper[4832]: I0131 04:44:13.974727 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:13Z","lastTransitionTime":"2026-01-31T04:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.077670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.077718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.077727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.077742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.077752 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.180603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.180654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.180670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.180691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.180706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.284803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.284852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.284864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.284886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.284902 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.387840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.388600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.388728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.388820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.388905 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.491817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.491853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.491862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.491877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.491887 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.595379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.595459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.595472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.595519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.595531 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.699097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.699158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.699176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.699203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.699220 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.803019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.803091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.803118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.803145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.803162 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.823000 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:19:46.923905917 +0000 UTC Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.859347 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:14 crc kubenswrapper[4832]: E0131 04:44:14.859909 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.928705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.928762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.928777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.928797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:14 crc kubenswrapper[4832]: I0131 04:44:14.928811 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:14Z","lastTransitionTime":"2026-01-31T04:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.031814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.031883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.031902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.031926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.031944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.136815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.136885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.136911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.136943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.136966 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.240014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.240062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.240077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.240097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.240109 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.342868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.342934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.342955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.342981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.343002 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.446368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.446443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.446472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.446504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.446525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.549752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.549810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.549821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.549840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.549854 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.652661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.652749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.652765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.652789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.652802 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.756048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.756099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.756111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.756134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.756147 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.823757 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:13:01.35335674 +0000 UTC Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858426 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858434 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:15 crc kubenswrapper[4832]: E0131 04:44:15.858680 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.858815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:15 crc kubenswrapper[4832]: E0131 04:44:15.858877 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:15 crc kubenswrapper[4832]: E0131 04:44:15.859947 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.969107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.969181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.969194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.969212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:15 crc kubenswrapper[4832]: I0131 04:44:15.969223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:15Z","lastTransitionTime":"2026-01-31T04:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.071415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.071468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.071484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.071513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.071531 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.175281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.175761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.176023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.176205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.176340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.279115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.279172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.279182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.279198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.279208 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.382628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.382666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.382678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.382695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.382706 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.485387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.486256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.486387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.486507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.486701 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.589354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.589391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.589401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.589419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.589431 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.691712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.691804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.691831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.691865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.691883 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.794883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.794985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.795004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.795068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.795087 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.824946 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:39:53.304925998 +0000 UTC Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.858796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:16 crc kubenswrapper[4832]: E0131 04:44:16.859217 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.897908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.897948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.897960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.897979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:16 crc kubenswrapper[4832]: I0131 04:44:16.897991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:16Z","lastTransitionTime":"2026-01-31T04:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.000273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.000336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.000348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.000365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.000376 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.103408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.103437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.103446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.103461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.103473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.206794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.206865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.206888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.206915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.206932 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.310131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.310171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.310181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.310197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.310206 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.412943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.413000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.413015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.413040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.413057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.514944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.514977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.514986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.515001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.515010 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.618093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.618140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.618150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.618165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.618176 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.721127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.721156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.721165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.721180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.721188 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.823661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.823696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.823704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.823719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.823727 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.826035 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:54:34.166542789 +0000 UTC Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.859129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.859172 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.859144 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:17 crc kubenswrapper[4832]: E0131 04:44:17.859313 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:17 crc kubenswrapper[4832]: E0131 04:44:17.859400 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:17 crc kubenswrapper[4832]: E0131 04:44:17.859613 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.926945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.926986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.926996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.927014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:17 crc kubenswrapper[4832]: I0131 04:44:17.927025 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:17Z","lastTransitionTime":"2026-01-31T04:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.030145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.030218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.030232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.030255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.030267 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.132693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.132751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.132765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.132783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.132794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.235358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.235390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.235400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.235415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.235423 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.337931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.337978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.337987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.338003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.338016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.441534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.441605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.441614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.441630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.441639 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.544363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.544431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.544443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.544467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.544483 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.647394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.647440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.647451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.647470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.647483 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.749916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.749977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.749987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.750011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.750028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.826910 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:08:04.017863471 +0000 UTC Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.852383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.852423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.852433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.852449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.852463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.858666 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:18 crc kubenswrapper[4832]: E0131 04:44:18.858794 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.954921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.954975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.954984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.955004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:18 crc kubenswrapper[4832]: I0131 04:44:18.955016 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:18Z","lastTransitionTime":"2026-01-31T04:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.057515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.057749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.057768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.057787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.057799 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.160134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.160172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.160183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.160200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.160212 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.262208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.262259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.262273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.262296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.262310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.364947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.364980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.364989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.365004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.365013 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.467528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.467609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.467622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.467641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.467659 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.570396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.570459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.570468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.570487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.570498 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.672996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.673024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.673032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.673048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.673059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.743168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.743221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.743233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.743252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.743263 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.758492 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:19Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.762728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.762772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.762784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.762802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.762815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.777699 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:19Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.781740 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.781785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.781821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.781840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.781857 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.793649 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:19Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.797256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.797293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.797302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.797320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.797332 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.808467 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:19Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.816202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.816250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.816263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.816291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.816302 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.827802 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:47:22.269035039 +0000 UTC Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.832647 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",
\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:19Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.832751 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.834589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.834620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.834630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.834648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.834659 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.859144 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.859345 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.859876 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.859948 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.859995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:19 crc kubenswrapper[4832]: E0131 04:44:19.860036 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.936820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.936846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.936855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.936870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:19 crc kubenswrapper[4832]: I0131 04:44:19.936880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:19Z","lastTransitionTime":"2026-01-31T04:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.039280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.039308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.039317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.039331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.039340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.092359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:20 crc kubenswrapper[4832]: E0131 04:44:20.092500 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:20 crc kubenswrapper[4832]: E0131 04:44:20.092553 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:44:52.092540381 +0000 UTC m=+101.041362066 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.141704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.141748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.141765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.141789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.141809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.244525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.244637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.244649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.244670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.244682 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.347300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.347368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.347378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.347419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.347431 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.449226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.449265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.449277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.449291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.449303 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.551594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.551638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.551648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.551662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.551672 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.655004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.655064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.655077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.655098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.655111 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.758549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.758606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.758618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.758635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.758646 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.828373 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:27:28.753252769 +0000 UTC Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.858794 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:20 crc kubenswrapper[4832]: E0131 04:44:20.859003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.862372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.862406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.862422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.862438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.862450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.872343 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.966081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.966122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.966133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.966152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:20 crc kubenswrapper[4832]: I0131 04:44:20.966164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:20Z","lastTransitionTime":"2026-01-31T04:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.068701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.068752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.068763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.068780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.068793 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.171151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.171221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.171240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.171268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.171287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.274446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.274496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.274508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.274535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.274549 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.377696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.377739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.377748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.377764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.377774 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.480431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.480486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.480499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.480518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.480530 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.582749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.582799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.582809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.582826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.582838 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.685118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.685156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.685168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.685185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.685197 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.788009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.788045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.788056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.788071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.788084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.829036 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:05:56.520012819 +0000 UTC Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.858306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.858325 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:21 crc kubenswrapper[4832]: E0131 04:44:21.858463 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.858338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:21 crc kubenswrapper[4832]: E0131 04:44:21.858725 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:21 crc kubenswrapper[4832]: E0131 04:44:21.859779 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.882596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-3
1T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.890637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.890683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.890696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.890715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.890728 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.894215 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.904490 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.914704 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.925610 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.938709 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.957852 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.969097 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.979761 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.991454 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:21Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.992221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.992278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.992289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.992335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:21 crc kubenswrapper[4832]: I0131 04:44:21.992350 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:21Z","lastTransitionTime":"2026-01-31T04:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.006311 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.018326 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.029268 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.038325 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.051804 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.064333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.077105 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.086610 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.095960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.096006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.096016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.096038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.096053 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.098308 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:22Z 
is after 2025-08-24T17:21:41Z" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.199527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.199587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.199597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.199613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.199626 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.302588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.302623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.302632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.302650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.302660 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.406236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.406293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.406305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.406323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.406335 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.508904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.508955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.508964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.508984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.508995 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.611369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.611417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.611432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.611456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.611472 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.714390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.714427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.714436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.714451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.714462 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.817012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.817059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.817072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.817090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.817101 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.829450 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:10:18.471039826 +0000 UTC Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.858730 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:22 crc kubenswrapper[4832]: E0131 04:44:22.858854 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.920047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.920103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.920118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.920137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:22 crc kubenswrapper[4832]: I0131 04:44:22.920150 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:22Z","lastTransitionTime":"2026-01-31T04:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.023617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.023676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.023694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.023721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.023741 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.126156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.126200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.126216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.126235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.126248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.230841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.230881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.230910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.230931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.230944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.334615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.334650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.334662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.334683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.334697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.437449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.437497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.437509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.437525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.437534 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.540074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.540123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.540141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.540162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.540175 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.642402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.642458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.642470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.642492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.642506 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.745846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.745916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.745926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.745953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.745965 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.829693 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:29:32.770468749 +0000 UTC Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.848223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.848264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.848272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.848292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.848304 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.858706 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.858772 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:23 crc kubenswrapper[4832]: E0131 04:44:23.858828 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.858890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:23 crc kubenswrapper[4832]: E0131 04:44:23.858986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:23 crc kubenswrapper[4832]: E0131 04:44:23.859189 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.951651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.951728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.951746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.951776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:23 crc kubenswrapper[4832]: I0131 04:44:23.951794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:23Z","lastTransitionTime":"2026-01-31T04:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.055442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.055514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.055525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.055544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.055554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.158076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.158113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.158123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.158139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.158148 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.260064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.260101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.260113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.260128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.260139 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.362102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.362149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.362166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.362193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.362210 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.363825 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/0.log" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.363883 4832 generic.go:334] "Generic (PLEG): container finished" podID="df4dafae-fa72-4f03-8531-93538336b0cd" containerID="fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1" exitCode=1 Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.363917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerDied","Data":"fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.364388 4832 scope.go:117] "RemoveContainer" containerID="fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.379400 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.394625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.408340 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.425998 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.441499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.455245 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.463970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.463992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.464001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc 
kubenswrapper[4832]: I0131 04:44:24.464017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.464028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.467879 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.477893 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.490695 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a
0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.503396 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.514569 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.524468 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.542738 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.552247 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.560763 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc 
kubenswrapper[4832]: I0131 04:44:24.566871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.566905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.566917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.566937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.566950 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.572197 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.582717 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.595435 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.611475 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:24Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.668979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.669048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.669064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.669110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.669127 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.772077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.772119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.772132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.772148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.772158 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.830806 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:48:45.431965253 +0000 UTC Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.858585 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:24 crc kubenswrapper[4832]: E0131 04:44:24.859151 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.859324 4832 scope.go:117] "RemoveContainer" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" Jan 31 04:44:24 crc kubenswrapper[4832]: E0131 04:44:24.859693 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.875004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.875057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.875070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.875091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.875107 4832 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.977173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.977216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.977226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.977243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:24 crc kubenswrapper[4832]: I0131 04:44:24.977252 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:24Z","lastTransitionTime":"2026-01-31T04:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.079940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.079984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.079994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.080013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.080023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.182718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.182754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.182762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.182779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.182788 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.285520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.285595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.285609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.285631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.285645 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.369778 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/0.log" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.369847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerStarted","Data":"2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.387797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.388252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.388296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.388318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.388391 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.390331 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.402990 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.414492 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc 
kubenswrapper[4832]: I0131 04:44:25.430410 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.446150 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.462476 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.482093 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.491325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.491384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.491397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.491417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.491435 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.498019 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.510186 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.523871 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.537699 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.551570 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.565537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.578220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.593309 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.594734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.594779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.594789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.594807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.594817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.610922 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.625695 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.637374 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.651743 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:25Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.698032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.698082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.698093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.698110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.698121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.801192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.801238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.801248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.801265 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.801277 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.831798 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:30:51.173046651 +0000 UTC Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.858930 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.859033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.858987 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:25 crc kubenswrapper[4832]: E0131 04:44:25.859295 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:25 crc kubenswrapper[4832]: E0131 04:44:25.859470 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:25 crc kubenswrapper[4832]: E0131 04:44:25.859696 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.904696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.904771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.904784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.904811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:25 crc kubenswrapper[4832]: I0131 04:44:25.904826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:25Z","lastTransitionTime":"2026-01-31T04:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.007477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.007540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.007558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.007602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.007617 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.110494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.110642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.110662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.110764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.110785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.214352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.214861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.215079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.215248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.215392 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.318958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.319026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.319046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.319073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.319094 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.422501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.422549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.422560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.422592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.422607 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.525379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.525644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.525757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.525865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.525950 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.629322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.629371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.629380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.629397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.629428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.731871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.731909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.731918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.731934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.731944 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.832515 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:01:28.401867985 +0000 UTC Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.842859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.842934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.842957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.842989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.843013 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.858467 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:26 crc kubenswrapper[4832]: E0131 04:44:26.858726 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.946007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.946074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.946089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.946113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:26 crc kubenswrapper[4832]: I0131 04:44:26.946127 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:26Z","lastTransitionTime":"2026-01-31T04:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.048325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.048384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.048396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.048415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.048428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.150794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.150864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.150878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.150896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.150908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.253840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.253885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.253896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.253912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.253922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.356394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.356452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.356461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.356478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.356489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.459519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.459619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.459638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.459662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.459678 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.562523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.562593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.562610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.562628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.562639 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.665707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.665769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.665783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.665799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.665809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.768448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.768492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.768503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.768523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.768535 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.833223 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:56:01.20984539 +0000 UTC Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.859040 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.859217 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.859222 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:27 crc kubenswrapper[4832]: E0131 04:44:27.859390 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:27 crc kubenswrapper[4832]: E0131 04:44:27.859682 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:27 crc kubenswrapper[4832]: E0131 04:44:27.859746 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.870630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.870659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.870668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.870682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.870694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.974300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.974377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.974393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.974417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:27 crc kubenswrapper[4832]: I0131 04:44:27.974434 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:27Z","lastTransitionTime":"2026-01-31T04:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.078253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.078369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.078389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.078420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.078441 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.181846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.181893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.181905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.181924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.181938 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.285371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.285419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.285431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.285448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.285514 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.388333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.388418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.388443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.388488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.388510 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.491939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.492001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.492017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.492041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.492058 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.595427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.595474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.595488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.595506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.595519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.698522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.698620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.698637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.698664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.698681 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.802059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.802167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.802177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.802193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.802204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.833983 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:18:12.195945276 +0000 UTC Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.858360 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:28 crc kubenswrapper[4832]: E0131 04:44:28.858493 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.904834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.904905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.904924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.904951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:28 crc kubenswrapper[4832]: I0131 04:44:28.904970 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:28Z","lastTransitionTime":"2026-01-31T04:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.007817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.007873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.007885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.007905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.007918 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.110238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.110291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.110308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.110326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.110380 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.213907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.214027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.214078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.214107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.214124 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.317366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.317422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.317433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.317468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.317486 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.419815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.419849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.419858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.419874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.419884 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.523245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.523308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.523327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.523355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.523372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.626890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.626945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.626962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.626986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.627004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.730491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.730538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.730606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.730653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.730672 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.833541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.833632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.833652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.833681 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.833702 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.834359 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:09:00.388349329 +0000 UTC Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.859393 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.859459 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.859412 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:29 crc kubenswrapper[4832]: E0131 04:44:29.859626 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:29 crc kubenswrapper[4832]: E0131 04:44:29.859712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:29 crc kubenswrapper[4832]: E0131 04:44:29.859784 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.936430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.936478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.936489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.936511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:29 crc kubenswrapper[4832]: I0131 04:44:29.936524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:29Z","lastTransitionTime":"2026-01-31T04:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.039986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.040042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.040059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.040084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.040101 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.129401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.129508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.129526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.129623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.129650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.148699 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.153186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.153728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.153771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.153798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.153816 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.176146 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.180315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.180359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.180373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.180390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.180399 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.193029 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.197308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.197346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.197360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.197377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.197386 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.208874 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.212868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.212922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.212936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.212956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.212969 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.242141 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:30Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.242328 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.250438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.250489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.250497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.250514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.250524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.353264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.353321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.353334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.353354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.353367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.456307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.456368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.456379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.456398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.456409 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.559102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.559153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.559167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.559197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.559207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.661536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.661572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.661606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.661624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.661637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.764745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.764801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.764815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.764836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.764850 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.834663 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:03:56.366835687 +0000 UTC Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.858429 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:30 crc kubenswrapper[4832]: E0131 04:44:30.858719 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.867380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.867416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.867426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.867441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.867454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.971228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.971298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.971315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.971346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:30 crc kubenswrapper[4832]: I0131 04:44:30.971366 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:30Z","lastTransitionTime":"2026-01-31T04:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.075605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.075688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.075705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.075731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.075751 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.179874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.179952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.179973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.180002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.180027 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.283779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.283864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.283893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.283925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.283948 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.387694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.387743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.387754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.387784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.387795 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.490479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.490535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.490547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.490589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.490603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.593610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.593687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.593715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.593751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.593778 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.697689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.697761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.697778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.697807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.697826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.801608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.801686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.801704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.801737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.801761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.835258 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:02:54.33741809 +0000 UTC Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.858708 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.858772 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.858724 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:31 crc kubenswrapper[4832]: E0131 04:44:31.858986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:31 crc kubenswrapper[4832]: E0131 04:44:31.859103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:31 crc kubenswrapper[4832]: E0131 04:44:31.859285 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.875304 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.899225 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.905794 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.905856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.905876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.905900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.905918 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:31Z","lastTransitionTime":"2026-01-31T04:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.934315 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.958170 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:31 crc kubenswrapper[4832]: I0131 04:44:31.981233 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.001984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:31Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.009503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.009551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.009608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.009633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.009649 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.027751 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.079591 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.093098 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.111984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.112870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.112946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.112972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.113024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.113047 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.131802 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.149837 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.171768 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.186770 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.207113 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.215908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.215990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.216010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 
04:44:32.216042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.216065 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.224121 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.242865 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.260563 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.283625 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9
408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:32Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.318757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.318791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.318800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.318815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.318825 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.421809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.421860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.421874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.421893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.421907 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.525314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.525382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.525405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.525438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.525464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.629075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.629138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.629153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.629176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.629186 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.732988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.733372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.733478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.733603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.733732 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.836690 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:23:09.904422772 +0000 UTC Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.837868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.838023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.838122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.838228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.838313 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.858423 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:32 crc kubenswrapper[4832]: E0131 04:44:32.858705 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.942447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.942515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.942536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.942593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:32 crc kubenswrapper[4832]: I0131 04:44:32.942615 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:32Z","lastTransitionTime":"2026-01-31T04:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.047224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.047293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.047313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.047340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.047358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.150872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.150942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.150964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.150991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.151009 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.255003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.255087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.255109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.255138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.255160 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.359517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.359604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.359622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.359648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.359670 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.463735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.463802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.463822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.463851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.463870 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.567831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.567911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.567936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.567964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.567982 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.671245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.671288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.671305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.671324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.671336 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.774965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.775043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.775063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.775101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.775128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.837675 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:24:20.105588527 +0000 UTC Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.858780 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:33 crc kubenswrapper[4832]: E0131 04:44:33.858978 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.859670 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.859766 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:33 crc kubenswrapper[4832]: E0131 04:44:33.859875 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:33 crc kubenswrapper[4832]: E0131 04:44:33.860122 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.879041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.879119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.879140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.879169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.879189 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.982334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.982407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.982434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.982468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:33 crc kubenswrapper[4832]: I0131 04:44:33.982489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:33Z","lastTransitionTime":"2026-01-31T04:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.085842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.085921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.085971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.086003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.086024 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.188826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.188864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.188875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.188890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.188900 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.291216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.291495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.291610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.291708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.291769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.394364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.394788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.394923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.395113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.395258 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.498912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.499078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.499099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.499128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.499148 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.601015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.601061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.601077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.601102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.601118 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.704186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.704441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.704520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.704631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.704713 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.808034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.808457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.808769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.809002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.809215 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.839226 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:28:25.392746734 +0000 UTC Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.859312 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:34 crc kubenswrapper[4832]: E0131 04:44:34.860126 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.913498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.913918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.914094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.914296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:34 crc kubenswrapper[4832]: I0131 04:44:34.914544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:34Z","lastTransitionTime":"2026-01-31T04:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.017679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.017990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.018268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.018478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.018695 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.121665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.122083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.122274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.122471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.122710 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.226686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.227046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.227138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.227254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.227335 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.330609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.330672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.330687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.330706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.330719 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.433936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.433985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.433997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.434017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.434029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.537206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.537285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.537309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.537343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.537368 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.640930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.641007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.641026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.641054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.641075 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.744448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.744523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.744545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.744608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.744634 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.841036 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:34:56.978736162 +0000 UTC Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.848080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.848135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.848154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.848183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.848203 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.858284 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:35 crc kubenswrapper[4832]: E0131 04:44:35.858448 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.858739 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.858851 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:35 crc kubenswrapper[4832]: E0131 04:44:35.859061 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:35 crc kubenswrapper[4832]: E0131 04:44:35.859302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.951244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.951300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.951317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.951341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:35 crc kubenswrapper[4832]: I0131 04:44:35.951358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:35Z","lastTransitionTime":"2026-01-31T04:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.055171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.055211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.055223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.055244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.055257 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.159171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.159241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.159263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.159342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.159363 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.169599 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.169697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.169783 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:40.169743342 +0000 UTC m=+149.118565067 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.169796 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.169853 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:45:40.169839915 +0000 UTC m=+149.118661600 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.170026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.170098 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.170140 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:45:40.170133084 +0000 UTC m=+149.118954769 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.263183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.263288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.263313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.263349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.263375 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.271234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.271307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271482 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271535 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271562 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271502 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 
04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271682 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271702 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271707 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:45:40.271673703 +0000 UTC m=+149.220495428 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.271771 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 04:45:40.271750095 +0000 UTC m=+149.220571800 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.366711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.366750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.366760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.366780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.366790 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.469967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.470019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.470034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.470058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.470074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.572813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.572855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.572866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.572883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.572895 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.675873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.675945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.675964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.675988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.676004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.778704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.778797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.778813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.778833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.778847 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.841431 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:17:15.971145868 +0000 UTC Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.859125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:36 crc kubenswrapper[4832]: E0131 04:44:36.859544 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.881205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.881274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.881295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.881323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.881346 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.985083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.985159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.985183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.985218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:36 crc kubenswrapper[4832]: I0131 04:44:36.985241 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:36Z","lastTransitionTime":"2026-01-31T04:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.088669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.088697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.088705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.088722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.088737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.191291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.191326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.191334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.191348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.191356 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.294553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.294624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.294634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.294652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.294661 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.398114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.398193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.398218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.398251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.398276 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.501240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.501299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.501317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.501346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.501364 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.604770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.604867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.604904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.604940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.604963 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.708060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.708132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.708155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.708185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.708207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.811179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.811277 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.811301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.811333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.811359 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.842099 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:20:47.988567274 +0000 UTC Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.859033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.859088 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.859032 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:37 crc kubenswrapper[4832]: E0131 04:44:37.859257 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:37 crc kubenswrapper[4832]: E0131 04:44:37.859364 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:37 crc kubenswrapper[4832]: E0131 04:44:37.859658 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.913895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.913941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.913960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.913980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:37 crc kubenswrapper[4832]: I0131 04:44:37.913997 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:37Z","lastTransitionTime":"2026-01-31T04:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.017865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.017931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.017969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.018007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.018029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.121044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.121098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.121114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.121137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.121155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.223601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.223656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.223675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.223702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.223719 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.327087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.327123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.327134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.327155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.327167 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.430535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.430621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.430642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.430675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.430700 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.534545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.534652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.534674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.534704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.534729 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.637858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.637899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.637912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.637931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.637942 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.741386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.741444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.741462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.741487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.741506 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.843430 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:51:36.706877365 +0000 UTC Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.845108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.845220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.845247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.845288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.845432 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.858443 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:38 crc kubenswrapper[4832]: E0131 04:44:38.858815 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.949059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.949111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.949124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.949147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:38 crc kubenswrapper[4832]: I0131 04:44:38.949160 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:38Z","lastTransitionTime":"2026-01-31T04:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.052213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.052267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.052281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.052303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.052318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.155480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.155632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.155658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.155694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.155718 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.258763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.258832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.258851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.258880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.258897 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.361552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.362028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.362039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.362058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.362067 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.465471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.465538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.465556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.465615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.465631 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.568662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.568698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.568708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.568724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.568734 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.671845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.671901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.671911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.671926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.671935 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.775426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.775482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.775494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.775511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.775520 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.844449 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:32:40.123418197 +0000 UTC Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.859021 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.859078 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.859094 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:39 crc kubenswrapper[4832]: E0131 04:44:39.859221 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:39 crc kubenswrapper[4832]: E0131 04:44:39.859390 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:39 crc kubenswrapper[4832]: E0131 04:44:39.860095 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.860659 4832 scope.go:117] "RemoveContainer" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.878427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.878487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.878509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.878537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.878600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.982301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.982361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.982379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.982406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:39 crc kubenswrapper[4832]: I0131 04:44:39.982425 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:39Z","lastTransitionTime":"2026-01-31T04:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.085231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.085290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.085308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.085340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.085393 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.188421 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.188478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.188495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.188526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.188543 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.292636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.292686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.292702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.292726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.292740 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.395981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.396015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.396026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.396050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.396061 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.422411 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/2.log" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.426577 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.427285 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.450994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.451045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.451057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.451078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.451095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.460703 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.478204 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.487181 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.488716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.488773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.488785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 
04:44:40.488838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.488856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.502003 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.502524 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.506399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 
04:44:40.506440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.506452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.506469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.506479 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.518060 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.519468 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",
\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.522975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.523022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.523036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.523055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.523065 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.532728 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.537699 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.542180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.542219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.542229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.542245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.542256 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.544758 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.556533 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.556725 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558150 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558664 4832 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.558694 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.573830 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.584733 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.596331 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.609233 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.620841 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.636978 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.650973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.661707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.661789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.661815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.661855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.661882 4832 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.677861 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.697977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.723224 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.736211 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd
15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.747041 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:40Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:40 crc 
kubenswrapper[4832]: I0131 04:44:40.765555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.765618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.765630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.765647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.765709 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.845299 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:44:47.638041556 +0000 UTC Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.858677 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:40 crc kubenswrapper[4832]: E0131 04:44:40.858834 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.868664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.868701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.868713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.868728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.868740 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.971222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.971263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.971274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.971291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:40 crc kubenswrapper[4832]: I0131 04:44:40.971301 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:40Z","lastTransitionTime":"2026-01-31T04:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.073415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.073459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.073472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.073491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.073504 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.176801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.176868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.176881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.176901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.176915 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.280807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.280877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.280896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.280923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.280942 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.383743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.383809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.383823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.383844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.383860 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.432529 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/3.log" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.433354 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/2.log" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.436143 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" exitCode=1 Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.436193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.436247 4832 scope.go:117] "RemoveContainer" containerID="24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.444043 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:44:41 crc kubenswrapper[4832]: E0131 04:44:41.444307 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.455469 4832 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.473196 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.487480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.487541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.487586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.487610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.487625 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.489856 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.501834 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.513151 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.525750 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.546377 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.559685 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.571993 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc 
kubenswrapper[4832]: I0131 04:44:41.583970 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.590021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.590050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.590058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.590075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.590086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.595658 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.611069 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.631365 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"try object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908100 6996 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908115 6996 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0131 04:44:40.908121 6996 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0131 04:44:40.908137 6996 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0131 04:44:40.908148 6996 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.645613 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.656729 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.667520 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.678950 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.689080 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.692924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.692962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.692973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc 
kubenswrapper[4832]: I0131 04:44:41.692994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.693008 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.700799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31
T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.796152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.796188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.796204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.796223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.796236 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.845718 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:41:05.874929565 +0000 UTC Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.859218 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.859313 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:41 crc kubenswrapper[4832]: E0131 04:44:41.859418 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.859481 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:41 crc kubenswrapper[4832]: E0131 04:44:41.859725 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:41 crc kubenswrapper[4832]: E0131 04:44:41.859923 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.877927 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.893979 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.898805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.898834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.898843 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.898862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.898872 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:41Z","lastTransitionTime":"2026-01-31T04:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.911822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe8
9981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.930829 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f1f59d8ac262505858128d551382fa8a54eccb630dfbc5065ef8e833cf8aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:12Z\\\",\\\"message\\\":\\\"131 04:44:12.019434 6531 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019442 6531 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0131 04:44:12.019447 6531 obj_retry.go:386] Retry successful for 
*v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0131 04:44:12.019452 6531 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0131 04:44:12.019308 6531 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 04:44:12.019489 6531 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0131 04:44:12.019517 6531 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"try object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908100 6996 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908115 6996 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0131 04:44:40.908121 6996 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0131 04:44:40.908137 6996 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0131 04:44:40.908148 6996 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.941827 4832 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.954326 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.966146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.977113 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:41 crc kubenswrapper[4832]: I0131 04:44:41.991499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:41Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.000279 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.000320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.000331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.000351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.000364 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.009268 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.025206 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.036807 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.056709 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.075976 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.093829 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.103538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.103618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.103646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.103680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.103699 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.109520 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.135153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff9678
37c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.149728 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.165036 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc 
kubenswrapper[4832]: I0131 04:44:42.207159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.207202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.207215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.207235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.207248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.312156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.312205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.312216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.312236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.312249 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.414908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.415005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.415021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.415046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.415062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.443388 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/3.log" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.448753 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:44:42 crc kubenswrapper[4832]: E0131 04:44:42.449121 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.470534 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.486014 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.509716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.517913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.517950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.517959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.517975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.517986 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.535239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"try object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908100 6996 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908115 6996 ovn.go:134] Ensuring zone local for Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0131 04:44:40.908121 6996 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0131 04:44:40.908137 6996 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0131 04:44:40.908148 6996 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.554427 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12
b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.576747 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.594803 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.609669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.620787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.620833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.620850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.620872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.620886 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.629439 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.643867 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a15
60ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.665999 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.683099 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.705094 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\"
,\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.723811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.723855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.723869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.723889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.723902 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.725842 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c
86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.744035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.762025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02
ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.791977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.809146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.824000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:42Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:42 crc 
kubenswrapper[4832]: I0131 04:44:42.826408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.826467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.826486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.826519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.826542 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.845969 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:00:34.889581949 +0000 UTC Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.858956 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:42 crc kubenswrapper[4832]: E0131 04:44:42.859170 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.929753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.929839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.929879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.929916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:42 crc kubenswrapper[4832]: I0131 04:44:42.929943 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:42Z","lastTransitionTime":"2026-01-31T04:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.032590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.032710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.032739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.032779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.032820 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.136934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.137008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.137027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.137053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.137072 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.240476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.240594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.240612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.240640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.240660 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.345081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.345226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.345299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.345328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.345385 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.448720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.448815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.448843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.448882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.448947 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.552823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.552931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.552944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.552986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.553001 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.655497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.655552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.655584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.655603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.655618 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.758006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.758052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.758060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.758077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.758090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.846418 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:47:27.608962762 +0000 UTC Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.858807 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.858807 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.858827 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:43 crc kubenswrapper[4832]: E0131 04:44:43.859062 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:43 crc kubenswrapper[4832]: E0131 04:44:43.859164 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:43 crc kubenswrapper[4832]: E0131 04:44:43.859281 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.861824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.861866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.861879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.861898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.861916 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.964269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.964298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.964306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.964321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:43 crc kubenswrapper[4832]: I0131 04:44:43.964332 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:43Z","lastTransitionTime":"2026-01-31T04:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.067656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.067747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.067769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.067797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.067818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.172001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.172033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.172046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.172063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.172074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.274891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.274934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.274946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.274963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.274972 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.378761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.378823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.378840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.378865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.378885 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.481775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.481825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.481838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.481859 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.481873 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.584477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.584587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.584615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.584647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.584669 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.687639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.687706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.687725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.687753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.687773 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.790874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.790932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.790950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.790976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.790994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.846755 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:43:27.977128042 +0000 UTC Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.859192 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:44 crc kubenswrapper[4832]: E0131 04:44:44.859387 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.893257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.893311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.893329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.893360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.893377 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.996214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.996274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.996292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.996316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:44 crc kubenswrapper[4832]: I0131 04:44:44.996334 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:44Z","lastTransitionTime":"2026-01-31T04:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.100337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.100439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.100463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.100496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.100526 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.203951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.204015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.204033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.204061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.204080 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.306625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.306668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.306683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.306701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.306714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.409182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.409228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.409241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.409260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.409274 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.511489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.511533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.511541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.511556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.511578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.614615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.614666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.614676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.614693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.614704 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.717301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.717360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.717377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.717437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.717461 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.820634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.820675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.820684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.820701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.820712 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.847695 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:18:35.712372388 +0000 UTC Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.859026 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:45 crc kubenswrapper[4832]: E0131 04:44:45.859170 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.859242 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.859026 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:45 crc kubenswrapper[4832]: E0131 04:44:45.859327 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:45 crc kubenswrapper[4832]: E0131 04:44:45.859540 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.923758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.924090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.924149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.924215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:45 crc kubenswrapper[4832]: I0131 04:44:45.924279 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:45Z","lastTransitionTime":"2026-01-31T04:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.026269 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.026669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.026743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.026808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.026984 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.130691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.130739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.130755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.130777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.130791 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.233648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.233988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.234121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.234262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.234390 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.337797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.337854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.337871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.337901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.337920 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.440519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.440603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.440621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.440648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.440666 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.544331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.544401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.544419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.544454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.544472 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.648786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.649088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.649153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.649242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.649305 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.752494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.752607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.752626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.752661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.752680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.848235 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:03:11.419093781 +0000 UTC Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.856182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.856344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.856408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.856474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.856539 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.858800 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:46 crc kubenswrapper[4832]: E0131 04:44:46.859303 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.960994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.961059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.961079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.961109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:46 crc kubenswrapper[4832]: I0131 04:44:46.961128 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:46Z","lastTransitionTime":"2026-01-31T04:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.065091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.065163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.065181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.065209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.065230 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.172694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.172751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.172767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.172792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.172812 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.275335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.275371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.275385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.275404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.275417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.379547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.379868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.379974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.380076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.380205 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.483149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.483232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.483262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.483282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.483293 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.586175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.586494 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.586700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.586920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.587113 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.689906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.689948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.689965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.689992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.690011 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.793690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.793762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.793780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.793811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.793830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.849037 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:22:12.497277089 +0000 UTC Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.858399 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:47 crc kubenswrapper[4832]: E0131 04:44:47.858652 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.858667 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.858705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:47 crc kubenswrapper[4832]: E0131 04:44:47.859217 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:47 crc kubenswrapper[4832]: E0131 04:44:47.859539 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.896669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.896720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.896738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.896763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:47 crc kubenswrapper[4832]: I0131 04:44:47.896784 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:47Z","lastTransitionTime":"2026-01-31T04:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.000458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.000584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.000603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.000634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.000716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.104419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.104884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.105117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.105282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.105414 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.209342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.209499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.209522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.209555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.209604 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.312808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.312882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.312902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.312932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.312951 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.420479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.420531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.420558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.420603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.420619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.524107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.524164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.524180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.524201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.524216 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.628115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.628178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.628198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.628223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.628243 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.731304 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.731375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.731399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.731431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.731454 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.834834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.834930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.835008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.835040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.835061 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.851792 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:21:14.361145674 +0000 UTC Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.858871 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:48 crc kubenswrapper[4832]: E0131 04:44:48.859122 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.939188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.939601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.939620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.939645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:48 crc kubenswrapper[4832]: I0131 04:44:48.939689 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:48Z","lastTransitionTime":"2026-01-31T04:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.042938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.043000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.043016 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.043039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.043057 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.146331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.146396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.146413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.146444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.146465 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.250234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.250305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.250328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.250359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.250381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.354157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.354201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.354218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.354246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.354263 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.457341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.457390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.457409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.457433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.457450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.560665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.560757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.560775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.560805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.560823 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.664184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.664250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.664268 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.664298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.664318 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.767351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.767410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.767427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.767457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.767474 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.852330 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:34:37.364284845 +0000 UTC Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.858766 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.858844 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:49 crc kubenswrapper[4832]: E0131 04:44:49.858964 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.859231 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:49 crc kubenswrapper[4832]: E0131 04:44:49.859341 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:49 crc kubenswrapper[4832]: E0131 04:44:49.859603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.869434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.869502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.869524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.869551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.869605 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.972239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.972316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.972333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.972357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:49 crc kubenswrapper[4832]: I0131 04:44:49.972374 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:49Z","lastTransitionTime":"2026-01-31T04:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.075876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.075964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.075988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.076020 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.076046 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.178483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.178526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.178536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.178552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.178575 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.281634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.281689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.281704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.281723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.281737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.384389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.384441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.384452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.384472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.384491 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.486964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.487002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.487012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.487030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.487043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.589814 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.589888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.589914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.589950 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.589974 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.644692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.644753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.644773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.644800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.644971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.671507 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.676480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.676538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.676582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.676608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.676624 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.695686 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.699390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.699453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.699474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.699502 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.699526 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.719697 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.723652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.723691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.723701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.723719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.723731 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.735667 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.739871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.739960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.740013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.740038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.740086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.756326 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:50Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.756488 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.758483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.758512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.758524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.758544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.758578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.852664 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:10:22.448412255 +0000 UTC Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.859085 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:50 crc kubenswrapper[4832]: E0131 04:44:50.859262 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.860526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.860616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.860635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.860658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.860675 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.962854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.962898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.962908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.962925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:50 crc kubenswrapper[4832]: I0131 04:44:50.962936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:50Z","lastTransitionTime":"2026-01-31T04:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.065291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.065335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.065344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.065358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.065367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.167706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.167760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.167777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.167798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.167809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.270097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.270173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.270191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.270216 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.270234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.372760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.372818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.372832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.372853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.372871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.476090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.476150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.476169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.476191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.476204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.578906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.578972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.578988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.579017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.579035 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.681712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.681767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.681780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.681803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.681817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.785247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.785347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.785373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.785410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.785467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.853704 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:15:35.718700717 +0000 UTC Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.859107 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.859223 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:51 crc kubenswrapper[4832]: E0131 04:44:51.859359 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.859407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:51 crc kubenswrapper[4832]: E0131 04:44:51.859504 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:51 crc kubenswrapper[4832]: E0131 04:44:51.859642 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.886748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52da9bf3-8e24-4cf8-a584-cb6c564d1130\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bccefeb4cd47dc6762cc79f539f4c1dbd4f08b361fe447a304682345efdce0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231c66eedb35c4077dbe68542d90f44f1f40a7369904679c06efeb2448c56e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://630beec828a8a866d0325485ef49022c6c564a93a8bc611c021c5e6209b84e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1526602e1239f0ce44d578e2d23c3bdd9408dca8b0491cfb6337d90069b79a00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b424f3b482cb813eac84da7ce510b643d17498288d41b1bcf7c28314030f7fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe467eeec38fbb5e3a0e14431f6728dfbce32aad6315fa80c5ec338fd388878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6c01a7f632a979f35894558c5030e4018af998d07ecd1b44ca596fd5ad5dbb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12599c7e4896d7ed2a778443e94d645146d5f491ff967837c4c87c8242072196\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:15Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.889646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.889742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.889774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.889818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.889854 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.899690 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.916056 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.934319 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.949321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.971710 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-899xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ea025bd-5921-4529-887b-d627fa8e245e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a48b835cf41c6da8bbe89981419a4909b20c5963aae3348db317839c58bbaf13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8653c6d0fc0e2ff988663a55e486aeefd8338fea9ab6fa4fb00384f292c524f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5136a4740905c1b522b10335706c2197ceae5abf61c0f5be7f777b841221334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5047d1c4b27c61168e39f52e2614a1da86b6050596be0fa9088a93ccd7d3a2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5b6d
bb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5b6dbb488046dbfa80bce7cd1395105efd30222c39e5c8ba8efbba99f7d8eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdfc47a6f985e10e68e7a7f5d7547ecff7993033ad1391f9b8d24af66d32bec1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a08933b22203aa1db6c5272ee44670d5121eb737de7e4c3a5a129c7091480c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vg6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-899xk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.993447 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e089fa33-e032-4755-8b7e-262adfecc82f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:40Z\\\",\\\"message\\\":\\\"try object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908100 6996 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0131 04:44:40.908115 6996 ovn.go:134] Ensuring zone local for Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0131 04:44:40.908121 6996 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0131 04:44:40.908137 6996 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF0131 04:44:40.908148 6996 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:44:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9833342a69c44a3e9
63b1cdc654bb27bdcc75794c67359b53576195bfd50765\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sb97j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7gvmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:51Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.994353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.994449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.994471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.994528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:51 crc kubenswrapper[4832]: I0131 04:44:51.994549 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:51Z","lastTransitionTime":"2026-01-31T04:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.010084 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c5f0a80-5a4f-4583-88d0-5e504d87d00a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bba437fdaf2685ae66f639a13c7b1d2bde4751f9dba2f1ff7e4cebffc76777be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5h2bq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bw458\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.024404 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70b23715-2e5a-45f7-9e0a-093c15037d3a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2380d182440a002ab4b9e6f73c24237bbafc58d97ae3125fd82a18d3a93aefa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16246c8f10f446746607bff347949336abc12b7180b8d7ff90a87c1195a1f123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hqnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rxzd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.045815 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06ee414fbc62d473bf5ff3bb0848f1102ed5799882c15becb93853d797a95c83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.066460 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7891232840be921a18c0efa7f5dc7a08f7ff8b3eeb667850961bb7221d3e7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.086380 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.097652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.097716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.097737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.097764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.097782 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.103173 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9d7aad9ad8109a959c0e2a78a2c21cba35589a6dfc899bf0d6ce94302b00d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cbb617a8bd006a97d23e304007a8d23cfa581de1aeaeaea4ff4befcbef2b3b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.117720 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qk99s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c35251d7-6c14-4d3b-94d9-afa0287c2894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23346d02ca08c63b66ab06a4c76df80f99cbca6f78f5239dc82a031d6224a77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dknt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qk99s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.139774 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-frk6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df4dafae-fa72-4f03-8531-93538336b0cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T04:44:23Z\\\",\\\"message\\\":\\\"2026-01-31T04:43:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526\\\\n2026-01-31T04:43:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ac68f4fd-9d79-440c-a6a2-5afc59bac526 to /host/opt/cni/bin/\\\\n2026-01-31T04:43:38Z [verbose] multus-daemon started\\\\n2026-01-31T04:43:38Z [verbose] Readiness Indicator file check\\\\n2026-01-31T04:44:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:44:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-494kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-frk6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.157879 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a20a7c-a8bf-48de-a1ef-bd44f628935c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://898c1f114ebe6e59fe285ebc316ae02920bdfa83d25af0c42a71146e1ffe0a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba61a47f9df1296bccbd587d3906473536491c8afd4916282227a6edc0e4be85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.167969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:52 crc kubenswrapper[4832]: E0131 04:44:52.168156 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:52 crc kubenswrapper[4832]: E0131 04:44:52.168461 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs podName:88205cd8-6bbf-40af-a0d1-bfae431d97e7 nodeName:}" failed. No retries permitted until 2026-01-31 04:45:56.168387529 +0000 UTC m=+165.117209204 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs") pod "network-metrics-daemon-rbg9h" (UID: "88205cd8-6bbf-40af-a0d1-bfae431d97e7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.179821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"285111dc-cc04-4ea2-837a-ae8ca5028ee3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-reso
urces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T04:43:26Z\\\",\\\"message\\\":\\\"W0131 04:43:15.020548 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0131 04:43:15.021281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769834595 cert, and key in /tmp/serving-cert-884860058/serving-signer.crt, /tmp/serving-cert-884860058/serving-signer.key\\\\nI0131 04:43:15.317014 1 observer_polling.go:159] Starting file observer\\\\nW0131 04:43:15.325679 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0131 04:43:15.325913 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 04:43:15.327534 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-884860058/tls.crt::/tmp/serving-cert-884860058/tls.key\\\\\\\"\\\\nF0131 04:43:25.993205 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c1e04b8fa2caebd324f445cc11d6d0c5b
2753812744606303ffe2626bad94f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.201337 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7108fda7-eed3-4f67-875e-b79be53024ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ea04ee347ef470f6079873eef5d02249eedac7a9e2238eb8ace74f6ad5990f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8823f5699772658381d74f052ced76637938fde86f1c4009db4364d36676a771\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21e2d0202fa28c6862e4fdbe2db516064cbcb66ce84d845d6c14e0f8bccdfa84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T04:43:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.202229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.202295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.202318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.202346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.202366 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.218481 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c644ef4b-5bc9-4409-a444-17d51552531e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://125d5fc534e456b0e7ea58b7c97d2bfe663627b9eb62c1b84cd9b6f8b160fa43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://cd9b05a6852dfbb9f587c742d552bba3ba200137f6acbe99e84a0027b61f9140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bde28cc6a5118fb123864ccb17a4749aabb27e184261410d038beda918864f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://423e8d10a6bafb559eed35a41b124dff3eb1d6b3f7703c60f6862b299a02e71c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T04:43:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T04:43:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:44:52Z is after 2025-08-24T17:21:41Z" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.305456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.305526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.305544 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.305601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.305623 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.408981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.409063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.409090 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.409124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.409146 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.512380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.512436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.512460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.512490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.512510 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.615611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.615668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.615683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.615706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.615723 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.718918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.719004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.719039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.719075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.719105 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.822546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.822634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.822651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.822676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.822697 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.854426 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:22:20.150278046 +0000 UTC Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.858768 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:52 crc kubenswrapper[4832]: E0131 04:44:52.858944 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.926759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.926826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.926843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.926878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:52 crc kubenswrapper[4832]: I0131 04:44:52.926898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:52Z","lastTransitionTime":"2026-01-31T04:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.030065 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.030133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.030156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.030188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.030211 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.134191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.134262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.134285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.134314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.134338 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.237739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.237800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.237817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.237843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.237861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.341242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.341307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.341325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.341358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.341379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.445056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.445113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.445132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.445153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.445169 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.547669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.547722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.547743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.547764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.547777 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.651605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.651672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.651684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.651703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.651717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.754832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.754891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.754908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.754929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.754941 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.854890 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:35:34.349328321 +0000 UTC Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858243 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858492 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858624 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.858797 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:53 crc kubenswrapper[4832]: E0131 04:44:53.858786 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:53 crc kubenswrapper[4832]: E0131 04:44:53.859129 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:53 crc kubenswrapper[4832]: E0131 04:44:53.859263 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.859474 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:44:53 crc kubenswrapper[4832]: E0131 04:44:53.860372 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.960968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.961008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.961018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.961034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:53 crc kubenswrapper[4832]: I0131 04:44:53.961046 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:53Z","lastTransitionTime":"2026-01-31T04:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.063647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.063685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.063695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.063710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.063720 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.166782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.166849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.166874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.166905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.166928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.269184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.269226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.269243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.269261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.269272 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.372353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.372381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.372389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.372405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.372416 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.475190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.475222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.475233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.475250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.475259 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.578186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.578232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.578240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.578255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.578269 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.680685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.680758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.680781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.680809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.680826 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.783155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.783198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.783208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.783227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.783239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.855372 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:14:03.066459036 +0000 UTC Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.858706 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:54 crc kubenswrapper[4832]: E0131 04:44:54.858846 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.886144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.886305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.886332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.886361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.886382 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.989383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.989461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.989479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.989508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:54 crc kubenswrapper[4832]: I0131 04:44:54.989531 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:54Z","lastTransitionTime":"2026-01-31T04:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.092801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.092862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.092878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.092906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.092924 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.196606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.196679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.196703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.196735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.196757 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.299127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.299241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.299266 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.299293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.299310 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.401501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.401555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.401596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.401615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.401627 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.503379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.503412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.503468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.503484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.503495 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.606808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.606872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.606890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.606914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.606936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.709191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.709255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.709276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.709305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.709325 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.812529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.812592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.812603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.812621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.812633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.856465 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:08:22.595002345 +0000 UTC Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.858994 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.859130 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:55 crc kubenswrapper[4832]: E0131 04:44:55.859200 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.859035 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:55 crc kubenswrapper[4832]: E0131 04:44:55.859455 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:55 crc kubenswrapper[4832]: E0131 04:44:55.859545 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.914900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.914972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.914998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.915027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:55 crc kubenswrapper[4832]: I0131 04:44:55.915049 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:55Z","lastTransitionTime":"2026-01-31T04:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.017270 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.017337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.017361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.017389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.017413 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.120880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.120964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.120981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.121006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.121023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.223143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.223198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.223215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.223241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.223260 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.326529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.326638 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.326657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.326682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.326699 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.430444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.430492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.430504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.430524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.430536 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.532529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.532577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.532588 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.532604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.532613 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.634773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.634823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.634831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.634847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.634857 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.738044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.738091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.738113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.738137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.738149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.841139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.841182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.841193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.841212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.841224 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.857552 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:53:32.753533265 +0000 UTC Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.858761 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:56 crc kubenswrapper[4832]: E0131 04:44:56.858900 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.943727 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.943792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.943808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.943828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:56 crc kubenswrapper[4832]: I0131 04:44:56.943859 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:56Z","lastTransitionTime":"2026-01-31T04:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.046541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.046595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.046608 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.046627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.046639 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.149278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.149339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.149356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.149385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.149403 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.251623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.251691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.251715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.251744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.251765 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.355180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.355222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.355233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.355251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.355263 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.458768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.458812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.458822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.458841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.458852 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.561346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.561439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.561457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.561482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.561499 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.664395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.664441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.664455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.664476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.664489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.767055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.767095 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.767145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.767176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.767193 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.858147 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:18:58.582256594 +0000 UTC Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.858290 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:57 crc kubenswrapper[4832]: E0131 04:44:57.858397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.858432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.858448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:57 crc kubenswrapper[4832]: E0131 04:44:57.858600 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:57 crc kubenswrapper[4832]: E0131 04:44:57.858675 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.869772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.869817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.869830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.869845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.869859 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.972459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.972501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.972512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.972530 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:57 crc kubenswrapper[4832]: I0131 04:44:57.972542 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:57Z","lastTransitionTime":"2026-01-31T04:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.075272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.075333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.075347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.075371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.075384 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.178019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.178080 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.178096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.178122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.178139 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.280885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.280959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.280980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.281008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.281028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.383866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.383910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.383921 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.383938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.383948 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.486043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.486086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.486101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.486123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.486137 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.588837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.588902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.588924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.588956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.588979 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.692489 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.692597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.692658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.692692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.692716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.796508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.796612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.796637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.796669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.796690 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.858898 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:44:38.931407679 +0000 UTC Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.859034 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:44:58 crc kubenswrapper[4832]: E0131 04:44:58.859807 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.899760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.899849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.899882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.899919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:58 crc kubenswrapper[4832]: I0131 04:44:58.899943 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:58Z","lastTransitionTime":"2026-01-31T04:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.002448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.002507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.002524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.002550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.002584 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.105475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.105518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.105532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.105546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.105555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.209827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.209928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.209956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.209983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.210001 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.313505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.313616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.313640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.313673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.313693 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.416840 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.416897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.416907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.416924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.416936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.519174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.519228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.519244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.519267 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.519287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.622657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.622698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.622707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.622742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.622753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.725086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.725134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.725169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.725188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.725200 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.828726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.828765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.828774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.828793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.828803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.859257 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:13:19.587401442 +0000 UTC Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.859430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.859616 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:44:59 crc kubenswrapper[4832]: E0131 04:44:59.859627 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.859651 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:44:59 crc kubenswrapper[4832]: E0131 04:44:59.859709 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:44:59 crc kubenswrapper[4832]: E0131 04:44:59.860147 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.931808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.931875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.931892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.931918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:44:59 crc kubenswrapper[4832]: I0131 04:44:59.931937 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:44:59Z","lastTransitionTime":"2026-01-31T04:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.035611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.035660 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.035673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.035698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.035709 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.138543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.138650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.138668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.138695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.138713 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.241908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.241974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.241991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.242018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.242036 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.345030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.345097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.345123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.345155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.345180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.448008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.448074 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.448096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.448122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.448145 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.551670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.551731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.551747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.551773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.551791 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.655272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.655341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.655357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.655376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.655388 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.758164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.758238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.758261 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.758289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.758311 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.858656 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:00 crc kubenswrapper[4832]: E0131 04:45:00.858904 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.859641 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:04:30.545547 +0000 UTC Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.861247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.861288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.861300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.861318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.861330 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.930190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.930224 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.930233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.930245 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.930254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: E0131 04:45:00.943851 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:00Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.947743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.947797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.947811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.947829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.947841 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: E0131 04:45:00.962590 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:00Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.967298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.967357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.967373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.967400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.967418 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:00 crc kubenswrapper[4832]: E0131 04:45:00.983493 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:00Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.988262 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.988317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.988334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.988356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:00 crc kubenswrapper[4832]: I0131 04:45:00.988370 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:00Z","lastTransitionTime":"2026-01-31T04:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.002379 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:00Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.007141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.007191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.007206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.007264 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.007277 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.019553 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T04:45:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c783a103-3bac-43f3-9bbb-fd265be6128f\\\",\\\"systemUUID\\\":\\\"31767ebb-3087-408c-bd64-29e9bda9f554\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.019716 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.021531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.021614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.021633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.021662 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.021682 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.124338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.124397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.124416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.124439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.124459 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.228116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.228213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.228243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.228291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.228317 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.331894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.331951 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.331966 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.331989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.332006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.435745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.435825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.435843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.435872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.435891 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.539038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.539099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.539117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.539143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.539160 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.641826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.641882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.641894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.641912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.641925 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.744513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.744578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.744592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.744609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.744620 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.847306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.847344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.847355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.847374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.847386 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.858920 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.858920 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.859089 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.859214 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.859288 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:01 crc kubenswrapper[4832]: E0131 04:45:01.859430 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.859888 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:22:54.144419968 +0000 UTC Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.873076 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nspv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830c4bc3-45df-4e7b-a494-dec77c4318ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://331d8dc909c9231e70e82aed6a8be1b68e568e765dffeddd12781ea7f7519700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T04:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tq9f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nspv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.885621 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88205cd8-6bbf-40af-a0d1-bfae431d97e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T04:43:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nm6dc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T04:43:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rbg9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T04:45:01Z is after 2025-08-24T17:21:41Z" Jan 31 04:45:01 crc 
kubenswrapper[4832]: I0131 04:45:01.928975 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.928950264 podStartE2EDuration="1m29.928950264s" podCreationTimestamp="2026-01-31 04:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:01.927122124 +0000 UTC m=+110.875943839" watchObservedRunningTime="2026-01-31 04:45:01.928950264 +0000 UTC m=+110.877771989" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.952908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.952984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.953007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.953035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:01 crc kubenswrapper[4832]: I0131 04:45:01.953058 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:01Z","lastTransitionTime":"2026-01-31T04:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.009226 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-899xk" podStartSLOduration=89.009204749 podStartE2EDuration="1m29.009204749s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:01.980029208 +0000 UTC m=+110.928850933" watchObservedRunningTime="2026-01-31 04:45:02.009204749 +0000 UTC m=+110.958026444" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.057396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.057440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.057453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.057471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.057481 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.076434 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podStartSLOduration=89.076411259 podStartE2EDuration="1m29.076411259s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.0761234 +0000 UTC m=+111.024945085" watchObservedRunningTime="2026-01-31 04:45:02.076411259 +0000 UTC m=+111.025232944" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.088037 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rxzd6" podStartSLOduration=89.088007437 podStartE2EDuration="1m29.088007437s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.087655105 +0000 UTC m=+111.036476810" watchObservedRunningTime="2026-01-31 04:45:02.088007437 +0000 UTC m=+111.036829122" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.118479 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.118454889 podStartE2EDuration="1m33.118454889s" podCreationTimestamp="2026-01-31 04:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.117492797 +0000 UTC m=+111.066314492" watchObservedRunningTime="2026-01-31 04:45:02.118454889 +0000 UTC m=+111.067276574" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.145984 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.145948295 podStartE2EDuration="1m29.145948295s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.141110657 +0000 UTC m=+111.089932342" watchObservedRunningTime="2026-01-31 04:45:02.145948295 +0000 UTC m=+111.094769980" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.156116 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.156079505 podStartE2EDuration="58.156079505s" podCreationTimestamp="2026-01-31 04:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.155425714 +0000 UTC m=+111.104247429" watchObservedRunningTime="2026-01-31 04:45:02.156079505 +0000 UTC m=+111.104901210" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.160902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.160969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.160987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.161011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.161030 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.192866 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qk99s" podStartSLOduration=90.192844723 podStartE2EDuration="1m30.192844723s" podCreationTimestamp="2026-01-31 04:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.169694669 +0000 UTC m=+111.118516374" watchObservedRunningTime="2026-01-31 04:45:02.192844723 +0000 UTC m=+111.141666408" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.217052 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-frk6z" podStartSLOduration=89.217025511 podStartE2EDuration="1m29.217025511s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.194383263 +0000 UTC m=+111.143204978" watchObservedRunningTime="2026-01-31 04:45:02.217025511 +0000 UTC m=+111.165847196" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.263915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.263970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.263980 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.263998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.264010 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.366980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.367044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.367057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.367077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.367092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.469692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.469754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.469769 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.469791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.469809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.573236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.573280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.573289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.573306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.573315 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.677246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.677292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.677301 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.677319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.677329 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.780107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.780150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.780187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.780206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.780218 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.858822 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:02 crc kubenswrapper[4832]: E0131 04:45:02.859000 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.860936 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:35:54.922019856 +0000 UTC Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.883014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.883112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.883132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.883164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.883183 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.986184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.986227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.986239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.986259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:02 crc kubenswrapper[4832]: I0131 04:45:02.986275 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:02Z","lastTransitionTime":"2026-01-31T04:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.088796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.088850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.088868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.088894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.088917 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.191851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.191899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.191918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.191942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.191958 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.294785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.294851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.294871 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.294900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.294919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.397497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.397593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.397611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.397641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.397661 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.501116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.501169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.501180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.501196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.501212 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.604901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.604968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.604986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.605012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.605030 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.707668 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.707786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.707804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.707865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.707901 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.811082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.811119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.811129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.811146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.811156 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.859359 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.859409 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.859367 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:03 crc kubenswrapper[4832]: E0131 04:45:03.859546 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:03 crc kubenswrapper[4832]: E0131 04:45:03.859625 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:03 crc kubenswrapper[4832]: E0131 04:45:03.859707 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.861665 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:24:38.223741334 +0000 UTC Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.914263 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.914296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.914305 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.914327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:03 crc kubenswrapper[4832]: I0131 04:45:03.914339 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:03Z","lastTransitionTime":"2026-01-31T04:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.017869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.017925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.017944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.017970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.017988 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.120409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.120450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.120461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.120479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.120492 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.223518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.223609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.223627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.223653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.223673 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.326534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.326641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.326665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.326697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.326722 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.430110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.430173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.430198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.430227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.430249 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.531952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.532002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.532019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.532045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.532062 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.635611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.635656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.635670 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.635685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.635695 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.739134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.739192 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.739210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.739242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.739266 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.842640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.842712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.842732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.842757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.842775 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.859372 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:04 crc kubenswrapper[4832]: E0131 04:45:04.859626 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.862497 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:48:51.114612935 +0000 UTC Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.946343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.946396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.946413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.946441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:04 crc kubenswrapper[4832]: I0131 04:45:04.946458 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:04Z","lastTransitionTime":"2026-01-31T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.048706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.048754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.048763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.048781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.048791 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.151697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.151735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.151743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.151758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.151770 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.254906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.254967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.254983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.255007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.255029 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.357201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.357234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.357242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.357255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.357264 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.459475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.459537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.459583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.459611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.459640 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.562925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.562989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.563008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.563034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.563052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.666254 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.666322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.666342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.666366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.666383 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.768897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.769010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.769031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.769057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.769080 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.858368 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.858448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.858368 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:05 crc kubenswrapper[4832]: E0131 04:45:05.858681 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:05 crc kubenswrapper[4832]: E0131 04:45:05.858820 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:05 crc kubenswrapper[4832]: E0131 04:45:05.858930 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.862841 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:02:19.205914207 +0000 UTC Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.872435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.872488 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.872505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.872528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.872545 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.976120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.976187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.976212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.976246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:05 crc kubenswrapper[4832]: I0131 04:45:05.976271 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:05Z","lastTransitionTime":"2026-01-31T04:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.080109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.080175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.080196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.080223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.080246 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.183391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.183448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.183466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.183511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.183530 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.286383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.286442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.286458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.286483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.286500 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.390342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.390399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.390420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.390446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.390463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.494051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.494122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.494175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.494202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.494220 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.597034 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.597086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.597103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.597127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.597143 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.699861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.699923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.699945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.699974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.699994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.803377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.803407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.803417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.803433 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.803444 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.858904 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:06 crc kubenswrapper[4832]: E0131 04:45:06.859078 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.864017 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:31:49.363215146 +0000 UTC Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.906159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.906206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.906218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.906239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:06 crc kubenswrapper[4832]: I0131 04:45:06.906252 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:06Z","lastTransitionTime":"2026-01-31T04:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.009469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.009537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.009593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.009628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.009649 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.113659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.113720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.113738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.113761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.113837 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.216592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.216665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.216690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.216722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.216747 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.321461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.321511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.321529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.321554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.321600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.426554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.426706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.426737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.426764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.426784 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.530242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.530371 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.530390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.530420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.530476 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.633942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.633985 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.633998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.634018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.634033 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.737597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.737669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.737694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.737724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.737745 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.840979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.841038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.841054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.841081 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.841099 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.859173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.859256 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.859256 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:07 crc kubenswrapper[4832]: E0131 04:45:07.859369 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:07 crc kubenswrapper[4832]: E0131 04:45:07.859548 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:07 crc kubenswrapper[4832]: E0131 04:45:07.859682 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.864186 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:55:49.614687641 +0000 UTC Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.944368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.944426 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.944449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.944479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:07 crc kubenswrapper[4832]: I0131 04:45:07.944502 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:07Z","lastTransitionTime":"2026-01-31T04:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.047603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.047680 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.047715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.047750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.047771 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.151379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.151446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.151469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.151503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.151521 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.254829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.254899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.254915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.254940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.254957 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.358685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.358742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.358766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.358796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.358818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.461613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.461672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.461691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.461716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.461733 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.564323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.564391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.564415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.564443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.564463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.668202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.668278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.668302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.668331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.668352 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.771658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.771720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.771744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.771781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.771801 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.858733 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:08 crc kubenswrapper[4832]: E0131 04:45:08.859656 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.860290 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:45:08 crc kubenswrapper[4832]: E0131 04:45:08.860624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.865022 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:43:05.230255063 +0000 UTC Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.874159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.874199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.874213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.874229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.874244 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.978019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.978091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.978110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.978136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:08 crc kubenswrapper[4832]: I0131 04:45:08.978154 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:08Z","lastTransitionTime":"2026-01-31T04:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.080924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.080978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.081001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.081029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.081050 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.184165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.184210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.184222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.184242 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.184254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.287768 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.287849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.287880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.287913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.287936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.391655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.391747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.391770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.391798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.391816 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.495365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.495410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.495420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.495441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.495451 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.598591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.598636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.598648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.598667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.598678 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.701212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.701276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.701290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.701311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.701327 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.804288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.804356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.804374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.804403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.804422 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.859222 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:09 crc kubenswrapper[4832]: E0131 04:45:09.859455 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.859537 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:09 crc kubenswrapper[4832]: E0131 04:45:09.859866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.860186 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:09 crc kubenswrapper[4832]: E0131 04:45:09.860620 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.865147 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:55:47.914698836 +0000 UTC Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.907763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.907839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.907855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.907882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:09 crc kubenswrapper[4832]: I0131 04:45:09.907898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:09Z","lastTransitionTime":"2026-01-31T04:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.011444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.011539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.011552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.011611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.011629 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.114800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.114864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.114882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.114944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.114966 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.218478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.218585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.218640 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.218672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.218691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.322464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.322509 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.322518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.322538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.322547 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.425429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.425512 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.425543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.425618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.425644 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.528823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.528866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.528879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.528899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.528915 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.554875 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/1.log" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.555910 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/0.log" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.555970 4832 generic.go:334] "Generic (PLEG): container finished" podID="df4dafae-fa72-4f03-8531-93538336b0cd" containerID="2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf" exitCode=1 Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.556005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerDied","Data":"2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.556053 4832 scope.go:117] "RemoveContainer" containerID="fb13d2b44dc9ea0527471f2c9ccebd1c45ceb7494f75b81b36fe96e6095cb0f1" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.556722 4832 scope.go:117] "RemoveContainer" containerID="2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf" Jan 31 04:45:10 crc kubenswrapper[4832]: E0131 04:45:10.556931 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-frk6z_openshift-multus(df4dafae-fa72-4f03-8531-93538336b0cd)\"" pod="openshift-multus/multus-frk6z" podUID="df4dafae-fa72-4f03-8531-93538336b0cd" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.591248 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=50.591220742 podStartE2EDuration="50.591220742s" podCreationTimestamp="2026-01-31 04:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:02.216905217 +0000 UTC m=+111.165726912" watchObservedRunningTime="2026-01-31 04:45:10.591220742 +0000 UTC m=+119.540042467" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.606228 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nspv9" podStartSLOduration=97.606201241 podStartE2EDuration="1m37.606201241s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:10.604243927 +0000 UTC m=+119.553065652" watchObservedRunningTime="2026-01-31 04:45:10.606201241 +0000 UTC m=+119.555022966" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.632408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.632831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.633023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.633208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.633387 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.736387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.736466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.736480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.736500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.736516 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.839346 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.839432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.839471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.839504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.839540 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.859290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:10 crc kubenswrapper[4832]: E0131 04:45:10.859467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.865341 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:46:40.06437844 +0000 UTC Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.942215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.942248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.942260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.942276 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:10 crc kubenswrapper[4832]: I0131 04:45:10.942287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:10Z","lastTransitionTime":"2026-01-31T04:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.044855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.045208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.045311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.045405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.045555 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:11Z","lastTransitionTime":"2026-01-31T04:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.090839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.090882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.090895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.090915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.090928 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T04:45:11Z","lastTransitionTime":"2026-01-31T04:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.147224 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw"] Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.147683 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.151228 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.151270 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.151548 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.151684 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.297151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.297633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aaec885-b07b-4ef4-99b8-a4a550079d18-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.297781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.297939 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aaec885-b07b-4ef4-99b8-a4a550079d18-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.298043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3aaec885-b07b-4ef4-99b8-a4a550079d18-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.399687 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aaec885-b07b-4ef4-99b8-a4a550079d18-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.399757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3aaec885-b07b-4ef4-99b8-a4a550079d18-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 
31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.399821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.399873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aaec885-b07b-4ef4-99b8-a4a550079d18-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.399919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.400038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.400152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3aaec885-b07b-4ef4-99b8-a4a550079d18-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.401828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3aaec885-b07b-4ef4-99b8-a4a550079d18-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.409991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3aaec885-b07b-4ef4-99b8-a4a550079d18-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.420140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aaec885-b07b-4ef4-99b8-a4a550079d18-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rnksw\" (UID: \"3aaec885-b07b-4ef4-99b8-a4a550079d18\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.472112 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" Jan 31 04:45:11 crc kubenswrapper[4832]: W0131 04:45:11.494385 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aaec885_b07b_4ef4_99b8_a4a550079d18.slice/crio-483381fab7b0814bc8054918ce2813cc462e86b1ee0a057fdcb309c52551ca12 WatchSource:0}: Error finding container 483381fab7b0814bc8054918ce2813cc462e86b1ee0a057fdcb309c52551ca12: Status 404 returned error can't find the container with id 483381fab7b0814bc8054918ce2813cc462e86b1ee0a057fdcb309c52551ca12 Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.562650 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/1.log" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.563816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" event={"ID":"3aaec885-b07b-4ef4-99b8-a4a550079d18","Type":"ContainerStarted","Data":"483381fab7b0814bc8054918ce2813cc462e86b1ee0a057fdcb309c52551ca12"} Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.858924 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.859404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.859985 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:11 crc kubenswrapper[4832]: E0131 04:45:11.859996 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:11 crc kubenswrapper[4832]: E0131 04:45:11.860334 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:11 crc kubenswrapper[4832]: E0131 04:45:11.860711 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.865446 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:18:34.73521553 +0000 UTC Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.865489 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 04:45:11 crc kubenswrapper[4832]: E0131 04:45:11.868252 4832 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 04:45:11 crc kubenswrapper[4832]: I0131 04:45:11.874462 4832 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 04:45:11 crc kubenswrapper[4832]: E0131 04:45:11.995295 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 31 04:45:12 crc kubenswrapper[4832]: I0131 04:45:12.569652 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" event={"ID":"3aaec885-b07b-4ef4-99b8-a4a550079d18","Type":"ContainerStarted","Data":"b9802d024172215b03d063269dd3d1c123b45edb5c4c9af3d9be9e63298571b9"} Jan 31 04:45:12 crc kubenswrapper[4832]: I0131 04:45:12.594039 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rnksw" podStartSLOduration=99.594016413 podStartE2EDuration="1m39.594016413s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:12.592821174 +0000 UTC m=+121.541642869" watchObservedRunningTime="2026-01-31 04:45:12.594016413 +0000 UTC m=+121.542838138" Jan 31 04:45:12 crc kubenswrapper[4832]: I0131 04:45:12.859425 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:12 crc kubenswrapper[4832]: E0131 04:45:12.861734 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:13 crc kubenswrapper[4832]: I0131 04:45:13.859208 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:13 crc kubenswrapper[4832]: I0131 04:45:13.859317 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:13 crc kubenswrapper[4832]: E0131 04:45:13.859397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:13 crc kubenswrapper[4832]: E0131 04:45:13.859483 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:13 crc kubenswrapper[4832]: I0131 04:45:13.859905 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:13 crc kubenswrapper[4832]: E0131 04:45:13.860105 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:14 crc kubenswrapper[4832]: I0131 04:45:14.858977 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:14 crc kubenswrapper[4832]: E0131 04:45:14.859663 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:15 crc kubenswrapper[4832]: I0131 04:45:15.858971 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:15 crc kubenswrapper[4832]: I0131 04:45:15.858971 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:15 crc kubenswrapper[4832]: E0131 04:45:15.859433 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:15 crc kubenswrapper[4832]: E0131 04:45:15.859599 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:15 crc kubenswrapper[4832]: I0131 04:45:15.859073 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:15 crc kubenswrapper[4832]: E0131 04:45:15.860225 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:16 crc kubenswrapper[4832]: I0131 04:45:16.858499 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:16 crc kubenswrapper[4832]: E0131 04:45:16.858724 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:16 crc kubenswrapper[4832]: E0131 04:45:16.996811 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:45:17 crc kubenswrapper[4832]: I0131 04:45:17.859200 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:17 crc kubenswrapper[4832]: I0131 04:45:17.859261 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:17 crc kubenswrapper[4832]: I0131 04:45:17.859261 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:17 crc kubenswrapper[4832]: E0131 04:45:17.859453 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:17 crc kubenswrapper[4832]: E0131 04:45:17.859641 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:17 crc kubenswrapper[4832]: E0131 04:45:17.859812 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:18 crc kubenswrapper[4832]: I0131 04:45:18.859016 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:18 crc kubenswrapper[4832]: E0131 04:45:18.859399 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:19 crc kubenswrapper[4832]: I0131 04:45:19.859337 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:19 crc kubenswrapper[4832]: E0131 04:45:19.859774 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:19 crc kubenswrapper[4832]: I0131 04:45:19.859967 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:19 crc kubenswrapper[4832]: E0131 04:45:19.860024 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:19 crc kubenswrapper[4832]: I0131 04:45:19.860135 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:19 crc kubenswrapper[4832]: E0131 04:45:19.860176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:20 crc kubenswrapper[4832]: I0131 04:45:20.858512 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:20 crc kubenswrapper[4832]: E0131 04:45:20.858759 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:20 crc kubenswrapper[4832]: I0131 04:45:20.860113 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:45:20 crc kubenswrapper[4832]: E0131 04:45:20.860511 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7gvmz_openshift-ovn-kubernetes(e089fa33-e032-4755-8b7e-262adfecc82f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" Jan 31 04:45:21 crc kubenswrapper[4832]: I0131 04:45:21.859104 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:21 crc kubenswrapper[4832]: I0131 04:45:21.859174 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:21 crc kubenswrapper[4832]: I0131 04:45:21.859132 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:21 crc kubenswrapper[4832]: E0131 04:45:21.859332 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:21 crc kubenswrapper[4832]: E0131 04:45:21.861720 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:21 crc kubenswrapper[4832]: I0131 04:45:21.862134 4832 scope.go:117] "RemoveContainer" containerID="2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf" Jan 31 04:45:21 crc kubenswrapper[4832]: E0131 04:45:21.862158 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:21 crc kubenswrapper[4832]: E0131 04:45:21.998077 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 31 04:45:22 crc kubenswrapper[4832]: I0131 04:45:22.604662 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/1.log" Jan 31 04:45:22 crc kubenswrapper[4832]: I0131 04:45:22.604748 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerStarted","Data":"43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f"} Jan 31 04:45:22 crc kubenswrapper[4832]: I0131 04:45:22.859036 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:22 crc kubenswrapper[4832]: E0131 04:45:22.859216 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:23 crc kubenswrapper[4832]: I0131 04:45:23.859246 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:23 crc kubenswrapper[4832]: I0131 04:45:23.859246 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:23 crc kubenswrapper[4832]: E0131 04:45:23.859378 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:23 crc kubenswrapper[4832]: I0131 04:45:23.859453 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:23 crc kubenswrapper[4832]: E0131 04:45:23.859519 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:23 crc kubenswrapper[4832]: E0131 04:45:23.859610 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:24 crc kubenswrapper[4832]: I0131 04:45:24.859282 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:24 crc kubenswrapper[4832]: E0131 04:45:24.859465 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:25 crc kubenswrapper[4832]: I0131 04:45:25.858674 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:25 crc kubenswrapper[4832]: I0131 04:45:25.858751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:25 crc kubenswrapper[4832]: I0131 04:45:25.858792 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:25 crc kubenswrapper[4832]: E0131 04:45:25.858828 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:25 crc kubenswrapper[4832]: E0131 04:45:25.858970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:25 crc kubenswrapper[4832]: E0131 04:45:25.859130 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:26 crc kubenswrapper[4832]: I0131 04:45:26.858606 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:26 crc kubenswrapper[4832]: E0131 04:45:26.858778 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:26 crc kubenswrapper[4832]: E0131 04:45:26.999864 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:45:27 crc kubenswrapper[4832]: I0131 04:45:27.858703 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:27 crc kubenswrapper[4832]: I0131 04:45:27.858743 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:27 crc kubenswrapper[4832]: I0131 04:45:27.858739 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:27 crc kubenswrapper[4832]: E0131 04:45:27.858921 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:27 crc kubenswrapper[4832]: E0131 04:45:27.859078 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:27 crc kubenswrapper[4832]: E0131 04:45:27.859135 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:28 crc kubenswrapper[4832]: I0131 04:45:28.858976 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:28 crc kubenswrapper[4832]: E0131 04:45:28.863533 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:29 crc kubenswrapper[4832]: I0131 04:45:29.859401 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:29 crc kubenswrapper[4832]: I0131 04:45:29.859461 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:29 crc kubenswrapper[4832]: E0131 04:45:29.859540 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:29 crc kubenswrapper[4832]: I0131 04:45:29.859648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:29 crc kubenswrapper[4832]: E0131 04:45:29.859945 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:29 crc kubenswrapper[4832]: E0131 04:45:29.860296 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:30 crc kubenswrapper[4832]: I0131 04:45:30.858323 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:30 crc kubenswrapper[4832]: E0131 04:45:30.858722 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:31 crc kubenswrapper[4832]: I0131 04:45:31.858520 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:31 crc kubenswrapper[4832]: I0131 04:45:31.858625 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:31 crc kubenswrapper[4832]: I0131 04:45:31.858682 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:31 crc kubenswrapper[4832]: E0131 04:45:31.859336 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:31 crc kubenswrapper[4832]: E0131 04:45:31.859504 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:31 crc kubenswrapper[4832]: E0131 04:45:31.859544 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:32 crc kubenswrapper[4832]: E0131 04:45:32.001011 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:45:32 crc kubenswrapper[4832]: I0131 04:45:32.859370 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:32 crc kubenswrapper[4832]: E0131 04:45:32.859927 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:33 crc kubenswrapper[4832]: I0131 04:45:33.858847 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:33 crc kubenswrapper[4832]: I0131 04:45:33.858870 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:33 crc kubenswrapper[4832]: E0131 04:45:33.859140 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:33 crc kubenswrapper[4832]: I0131 04:45:33.859243 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:33 crc kubenswrapper[4832]: E0131 04:45:33.859460 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:33 crc kubenswrapper[4832]: E0131 04:45:33.860175 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:33 crc kubenswrapper[4832]: I0131 04:45:33.860515 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.654541 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/3.log" Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.659139 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerStarted","Data":"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6"} Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.659616 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.687423 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podStartSLOduration=121.687397976 podStartE2EDuration="2m1.687397976s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:34.686477219 
+0000 UTC m=+143.635298904" watchObservedRunningTime="2026-01-31 04:45:34.687397976 +0000 UTC m=+143.636219681" Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.858981 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:34 crc kubenswrapper[4832]: E0131 04:45:34.859153 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.888827 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rbg9h"] Jan 31 04:45:34 crc kubenswrapper[4832]: I0131 04:45:34.888940 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:34 crc kubenswrapper[4832]: E0131 04:45:34.889032 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:35 crc kubenswrapper[4832]: I0131 04:45:35.858634 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:35 crc kubenswrapper[4832]: I0131 04:45:35.858733 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:35 crc kubenswrapper[4832]: E0131 04:45:35.859672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:35 crc kubenswrapper[4832]: E0131 04:45:35.859672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:36 crc kubenswrapper[4832]: I0131 04:45:36.858825 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:36 crc kubenswrapper[4832]: I0131 04:45:36.858867 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:36 crc kubenswrapper[4832]: E0131 04:45:36.859001 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:36 crc kubenswrapper[4832]: E0131 04:45:36.859143 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:37 crc kubenswrapper[4832]: E0131 04:45:37.002324 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 04:45:37 crc kubenswrapper[4832]: I0131 04:45:37.860349 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:37 crc kubenswrapper[4832]: I0131 04:45:37.860363 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:37 crc kubenswrapper[4832]: E0131 04:45:37.860590 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:37 crc kubenswrapper[4832]: E0131 04:45:37.860773 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:38 crc kubenswrapper[4832]: I0131 04:45:38.859196 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:38 crc kubenswrapper[4832]: I0131 04:45:38.859220 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:38 crc kubenswrapper[4832]: E0131 04:45:38.859497 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:38 crc kubenswrapper[4832]: E0131 04:45:38.859702 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:39 crc kubenswrapper[4832]: I0131 04:45:39.858898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:39 crc kubenswrapper[4832]: E0131 04:45:39.859292 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:39 crc kubenswrapper[4832]: I0131 04:45:39.859387 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:39 crc kubenswrapper[4832]: E0131 04:45:39.859668 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.224528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.224695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.224734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.224860 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.224895 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 04:47:42.224828224 +0000 UTC m=+271.173649959 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.224925 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.224971 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:47:42.224947178 +0000 UTC m=+271.173768913 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.225046 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 04:47:42.2250171 +0000 UTC m=+271.173838805 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.326244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.326304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326482 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326503 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326500 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326593 4832 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326611 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326518 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326692 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 04:47:42.326669409 +0000 UTC m=+271.275491154 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.326746 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:47:42.326727111 +0000 UTC m=+271.275548806 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.858714 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:40 crc kubenswrapper[4832]: I0131 04:45:40.858756 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.858920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:45:40 crc kubenswrapper[4832]: E0131 04:45:40.859150 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rbg9h" podUID="88205cd8-6bbf-40af-a0d1-bfae431d97e7" Jan 31 04:45:41 crc kubenswrapper[4832]: I0131 04:45:41.859185 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:41 crc kubenswrapper[4832]: I0131 04:45:41.859292 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:41 crc kubenswrapper[4832]: E0131 04:45:41.861049 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 04:45:41 crc kubenswrapper[4832]: E0131 04:45:41.861280 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.009036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.064296 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v6mt8"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.064855 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.065097 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.065398 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.065812 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.066214 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.076453 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.079836 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn2qs"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.080378 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.080737 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.080737 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.092090 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.092202 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.092686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.093203 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.119222 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk"] Jan 31 
04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120097 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120236 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120372 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120459 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120662 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.120847 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.121012 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.121105 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.121647 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.121803 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.124350 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.124541 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.124672 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.125396 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.125674 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.125849 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126004 
4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126117 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126304 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126407 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126581 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126717 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.126860 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.127036 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.127182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.127306 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.128310 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb"] Jan 31 
04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.128623 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.128928 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.131021 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vs629"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.131338 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tphpp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.131782 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.132663 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.132710 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.132798 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.132836 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.134345 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.136393 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.136535 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.136674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.136833 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137299 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137423 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137450 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137521 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137588 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137686 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137424 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.137990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.138229 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.138434 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.138446 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.138535 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.140575 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.140752 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.140868 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.140950 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgd6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.141548 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.141842 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.141962 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.142110 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.144007 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.145472 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.146290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.146428 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.146599 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.146723 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.146823 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.147433 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148156 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148267 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-encryption-config\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148349 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-policies\") pod 
\"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148388 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz24\" (UniqueName: \"kubernetes.io/projected/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-kube-api-access-nmz24\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqg9\" (UniqueName: \"kubernetes.io/projected/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-kube-api-access-mqqg9\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fkh\" (UniqueName: \"kubernetes.io/projected/577e3549-41e2-4af0-9b37-807d419dfbb9-kube-api-access-v2fkh\") pod \"downloads-7954f5f757-tphpp\" (UID: \"577e3549-41e2-4af0-9b37-807d419dfbb9\") " pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148455 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148470 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148495 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l57p\" (UniqueName: \"kubernetes.io/projected/9ed37686-689b-46e5-8069-0e4de3519afb-kube-api-access-5l57p\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148511 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-dir\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148528 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-audit\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7956db-4568-4aa9-806f-b6eb458a1562-serving-cert\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148612 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148660 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66kt\" (UniqueName: \"kubernetes.io/projected/da3cda05-4158-4df2-93ab-6526af2232ba-kube-api-access-r66kt\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdf7\" (UniqueName: \"kubernetes.io/projected/3a7956db-4568-4aa9-806f-b6eb458a1562-kube-api-access-sxdf7\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-client\") pod \"apiserver-76f77b778f-v6mt8\" (UID: 
\"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148775 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-service-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9d087d-2335-4629-8cc8-6eeb320aa797-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148849 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-config\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148872 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4k7k\" (UniqueName: \"kubernetes.io/projected/7b9d087d-2335-4629-8cc8-6eeb320aa797-kube-api-access-r4k7k\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148888 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-node-pullsecrets\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-images\") pod 
\"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q55c\" (UniqueName: \"kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-audit-dir\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149019 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-encryption-config\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-config\") pod 
\"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-client\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-config\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhh9\" (UniqueName: \"kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149168 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxcq\" (UniqueName: \"kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149187 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-serving-cert\") pod 
\"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149264 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-serving-cert\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149281 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ed37686-689b-46e5-8069-0e4de3519afb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.148796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.149417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-image-import-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.159621 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.161501 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.161942 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.162813 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.163079 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.163091 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.163707 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.163938 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.172487 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.172858 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.174601 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.176373 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.181942 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.182053 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.182944 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 
04:45:42.183045 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.183338 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.184019 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.186516 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.187142 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.187525 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.187563 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.188092 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.188220 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.192281 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.192751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 
04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.194722 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.196247 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87hn6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.197021 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.197681 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.197788 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.197996 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.199109 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.199308 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gxndz"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.199873 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.202017 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.202220 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.203739 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.203934 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.204165 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.204287 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.208645 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.208680 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.209307 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.210752 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 04:45:42 crc 
kubenswrapper[4832]: I0131 04:45:42.212116 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.204402 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.213703 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.220670 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.221545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.222153 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.223838 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.225642 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.225816 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.228638 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5xpcm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.229515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.231063 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.231356 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.237552 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.241640 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.242480 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.242926 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.243260 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.243768 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.244445 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-serving-cert\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-service-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-client\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274885 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.274951 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-encryption-config\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc 
kubenswrapper[4832]: I0131 04:45:42.274996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-trusted-ca\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275057 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-config\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-config\") pod \"apiserver-76f77b778f-v6mt8\" (UID: 
\"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-client\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxcq\" (UniqueName: \"kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhh9\" (UniqueName: \"kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275347 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-config\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275387 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgs5\" (UniqueName: \"kubernetes.io/projected/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-kube-api-access-8bgs5\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-serving-cert\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" 
Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275512 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvvc\" (UniqueName: \"kubernetes.io/projected/06433b0e-798d-4bc9-90da-1a92bbc86acd-kube-api-access-7vvvc\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-image-import-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-serving-cert\") pod 
\"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ed37686-689b-46e5-8069-0e4de3519afb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06433b0e-798d-4bc9-90da-1a92bbc86acd-serving-cert\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275834 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-encryption-config\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-policies\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.275969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2fkh\" (UniqueName: \"kubernetes.io/projected/577e3549-41e2-4af0-9b37-807d419dfbb9-kube-api-access-v2fkh\") pod \"downloads-7954f5f757-tphpp\" (UID: \"577e3549-41e2-4af0-9b37-807d419dfbb9\") " pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:45:42 crc 
kubenswrapper[4832]: I0131 04:45:42.276002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276032 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmz24\" (UniqueName: \"kubernetes.io/projected/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-kube-api-access-nmz24\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqg9\" (UniqueName: \"kubernetes.io/projected/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-kube-api-access-mqqg9\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: 
\"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l57p\" (UniqueName: \"kubernetes.io/projected/9ed37686-689b-46e5-8069-0e4de3519afb-kube-api-access-5l57p\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19cb7c8-32ec-458e-b342-c28b00bcca91-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276275 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-dir\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-audit\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276364 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7956db-4568-4aa9-806f-b6eb458a1562-serving-cert\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276395 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276422 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r66kt\" (UniqueName: \"kubernetes.io/projected/da3cda05-4158-4df2-93ab-6526af2232ba-kube-api-access-r66kt\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276537 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276590 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdf7\" (UniqueName: \"kubernetes.io/projected/3a7956db-4568-4aa9-806f-b6eb458a1562-kube-api-access-sxdf7\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-client\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276756 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-service-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqrt\" (UniqueName: \"kubernetes.io/projected/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-kube-api-access-fmqrt\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n49v\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-kube-api-access-4n49v\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9d087d-2335-4629-8cc8-6eeb320aa797-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276941 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-config\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.276969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4k7k\" (UniqueName: \"kubernetes.io/projected/7b9d087d-2335-4629-8cc8-6eeb320aa797-kube-api-access-r4k7k\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277048 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-node-pullsecrets\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f19cb7c8-32ec-458e-b342-c28b00bcca91-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q55c\" (UniqueName: \"kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-audit-dir\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.277193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-images\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 
04:45:42.277225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-config\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.279660 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rggw8"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.280261 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-config\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.280800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.281033 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-dir\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.281171 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.281532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-audit\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.282492 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.282968 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.283398 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.288425 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.289305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.290271 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-config\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.290951 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-serving-cert\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.291170 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.291944 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.292644 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.294042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.298721 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.302634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.303629 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-serving-cert\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.304096 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.305107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.305848 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.306148 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.306434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.307476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 
04:45:42.307720 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-node-pullsecrets\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.312359 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-client\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.312834 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.313172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.313309 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da3cda05-4158-4df2-93ab-6526af2232ba-audit-dir\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.313481 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.316438 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.316617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.316681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-client\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.317075 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.317204 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-config\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc 
kubenswrapper[4832]: I0131 04:45:42.317764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a7956db-4568-4aa9-806f-b6eb458a1562-service-ca-bundle\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.318100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7b9d087d-2335-4629-8cc8-6eeb320aa797-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.318601 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.318672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da3cda05-4158-4df2-93ab-6526af2232ba-image-import-ca\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.318748 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-encryption-config\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: 
\"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.318871 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.319283 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.319308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ed37686-689b-46e5-8069-0e4de3519afb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.319322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-audit-policies\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.319696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.319785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.320025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7956db-4568-4aa9-806f-b6eb458a1562-serving-cert\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.320036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ed37686-689b-46e5-8069-0e4de3519afb-images\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.320238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.320781 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.321925 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48nqf"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.322936 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.323743 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.324115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.324382 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.324890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.327066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.327152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da3cda05-4158-4df2-93ab-6526af2232ba-encryption-config\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.332597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.333096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca\") pod 
\"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.335373 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.336630 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.348235 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.349037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.349454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.360027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.374358 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.375955 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.376122 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.376367 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.376832 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377187 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v6mt8"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377204 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377326 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377690 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377775 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn2qs"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377800 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgd6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377815 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.377998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.378137 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.378157 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.378207 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.378228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.378472 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dpgh5"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379033 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19cb7c8-32ec-458e-b342-c28b00bcca91-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379662 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqrt\" (UniqueName: \"kubernetes.io/projected/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-kube-api-access-fmqrt\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n49v\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-kube-api-access-4n49v\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379783 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f19cb7c8-32ec-458e-b342-c28b00bcca91-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-config\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-serving-cert\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-service-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-client\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-trusted-ca\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-config\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.379981 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgs5\" (UniqueName: \"kubernetes.io/projected/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-kube-api-access-8bgs5\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.380019 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvvc\" (UniqueName: \"kubernetes.io/projected/06433b0e-798d-4bc9-90da-1a92bbc86acd-kube-api-access-7vvvc\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " 
pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.380037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06433b0e-798d-4bc9-90da-1a92bbc86acd-serving-cert\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.380106 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.380183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.380695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.381301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.381623 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.382879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f19cb7c8-32ec-458e-b342-c28b00bcca91-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.382995 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.383770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-trusted-ca\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.385250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-config\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.387623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-service-ca\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.387710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-etcd-client\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.388456 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.389255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06433b0e-798d-4bc9-90da-1a92bbc86acd-config\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.389834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19cb7c8-32ec-458e-b342-c28b00bcca91-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.390186 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.390367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/06433b0e-798d-4bc9-90da-1a92bbc86acd-serving-cert\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.390374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gxndz"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.391329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-serving-cert\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.391996 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48nqf"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.393607 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vs629"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.395695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.395939 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.397053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-rggw8"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.405606 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.407306 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dpgh5"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.410382 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87hn6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.410416 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.412327 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.414409 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.417591 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.418825 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tphpp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.421203 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.422415 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.423512 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wht2v"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.424157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.426047 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zz6xx"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.427178 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.427379 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.428032 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.428868 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.430397 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.435664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.435712 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.437432 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.438873 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.440158 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.445779 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.447802 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.448908 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.449892 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.452339 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zz6xx"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.455892 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.457802 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ch5bd"] Jan 31 04:45:42 crc kubenswrapper[4832]: 
I0131 04:45:42.459496 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ch5bd"] Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.459618 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.468047 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.488310 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.508460 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.528155 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.549488 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.568391 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.590126 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.616914 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.628763 4832 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.649186 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.668931 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.688455 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.709779 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.729268 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.748310 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.767998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.789741 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.807655 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.848978 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.858510 4832 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.858643 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.868193 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.887862 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.909122 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.928470 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.948972 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.969086 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 04:45:42 crc kubenswrapper[4832]: I0131 04:45:42.989036 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.008186 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 
31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.029059 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.049396 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.092486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdf7\" (UniqueName: \"kubernetes.io/projected/3a7956db-4568-4aa9-806f-b6eb458a1562-kube-api-access-sxdf7\") pod \"authentication-operator-69f744f599-vs629\" (UID: \"3a7956db-4568-4aa9-806f-b6eb458a1562\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.111432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqg9\" (UniqueName: \"kubernetes.io/projected/c65562f0-ce98-4b36-aa5a-bbac4b1515fb-kube-api-access-mqqg9\") pod \"apiserver-7bbb656c7d-6zwmk\" (UID: \"c65562f0-ce98-4b36-aa5a-bbac4b1515fb\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.130880 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.132061 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.137873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l57p\" (UniqueName: \"kubernetes.io/projected/9ed37686-689b-46e5-8069-0e4de3519afb-kube-api-access-5l57p\") pod \"machine-api-operator-5694c8668f-vn2qs\" (UID: \"9ed37686-689b-46e5-8069-0e4de3519afb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.151298 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.170303 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.182811 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.189241 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.209235 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.229424 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.282545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r66kt\" (UniqueName: \"kubernetes.io/projected/da3cda05-4158-4df2-93ab-6526af2232ba-kube-api-access-r66kt\") pod \"apiserver-76f77b778f-v6mt8\" (UID: \"da3cda05-4158-4df2-93ab-6526af2232ba\") " pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.306827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxcq\" (UniqueName: \"kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq\") pod \"oauth-openshift-558db77b4-94sbp\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.326305 4832 request.go:700] Waited for 1.018374552s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.326760 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jhhh9\" (UniqueName: \"kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9\") pod \"route-controller-manager-6576b87f9c-k4lgm\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.328204 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.333717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4k7k\" (UniqueName: \"kubernetes.io/projected/7b9d087d-2335-4629-8cc8-6eeb320aa797-kube-api-access-r4k7k\") pod \"cluster-samples-operator-665b6dd947-98tzb\" (UID: \"7b9d087d-2335-4629-8cc8-6eeb320aa797\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.340058 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.349393 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.387283 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.389744 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2fkh\" (UniqueName: \"kubernetes.io/projected/577e3549-41e2-4af0-9b37-807d419dfbb9-kube-api-access-v2fkh\") pod \"downloads-7954f5f757-tphpp\" (UID: \"577e3549-41e2-4af0-9b37-807d419dfbb9\") " pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.405809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.413486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q55c\" (UniqueName: \"kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c\") pod \"controller-manager-879f6c89f-fkwvm\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.429714 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.430081 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmz24\" (UniqueName: \"kubernetes.io/projected/cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf-kube-api-access-nmz24\") pod \"openshift-apiserver-operator-796bbdcf4f-x6bgt\" (UID: \"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.431934 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 
04:45:43.439951 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.449085 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.468055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.489173 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.491766 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vs629"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.508328 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.529478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.546476 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v6mt8"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.548333 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 04:45:43 crc kubenswrapper[4832]: W0131 04:45:43.561166 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda3cda05_4158_4df2_93ab_6526af2232ba.slice/crio-4b9011dbd24f257d14f17a7fe4cd829b4bd1fda31d5a4c04fb39f46565e8e6ad WatchSource:0}: Error finding container 
4b9011dbd24f257d14f17a7fe4cd829b4bd1fda31d5a4c04fb39f46565e8e6ad: Status 404 returned error can't find the container with id 4b9011dbd24f257d14f17a7fe4cd829b4bd1fda31d5a4c04fb39f46565e8e6ad Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.568896 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.587447 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.589493 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.609145 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.624488 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.628423 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.634520 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn2qs"] Jan 31 04:45:43 crc kubenswrapper[4832]: W0131 04:45:43.644065 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed37686_689b_46e5_8069_0e4de3519afb.slice/crio-891aa89706ab38675be37e3a78d36e29ace9d1aa4b015171c57c2f676360a8c0 WatchSource:0}: Error finding container 891aa89706ab38675be37e3a78d36e29ace9d1aa4b015171c57c2f676360a8c0: Status 404 returned error can't find the container with id 891aa89706ab38675be37e3a78d36e29ace9d1aa4b015171c57c2f676360a8c0 Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.650584 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.666501 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.667778 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.674232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.687778 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.696230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" event={"ID":"9ed37686-689b-46e5-8069-0e4de3519afb","Type":"ContainerStarted","Data":"891aa89706ab38675be37e3a78d36e29ace9d1aa4b015171c57c2f676360a8c0"} Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.696699 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" Jan 31 04:45:43 crc kubenswrapper[4832]: W0131 04:45:43.700279 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1bfdfa4_81a2_460c_9eb5_4c6723f61ae0.slice/crio-23de51ed6aec4c15fb52121129457bbebf7f4dcf36f0b25afdbf46e9426fd226 WatchSource:0}: Error finding container 23de51ed6aec4c15fb52121129457bbebf7f4dcf36f0b25afdbf46e9426fd226: Status 404 returned error can't find the container with id 23de51ed6aec4c15fb52121129457bbebf7f4dcf36f0b25afdbf46e9426fd226 Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.702976 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.708517 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.711286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" event={"ID":"da3cda05-4158-4df2-93ab-6526af2232ba","Type":"ContainerStarted","Data":"4b9011dbd24f257d14f17a7fe4cd829b4bd1fda31d5a4c04fb39f46565e8e6ad"} Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.712513 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" event={"ID":"c65562f0-ce98-4b36-aa5a-bbac4b1515fb","Type":"ContainerStarted","Data":"e075bc9a0dcfdffb7f2d6c63d148db35190a6b7fb869ceaa4f1cf361ee566572"} Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.714116 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" 
event={"ID":"3a7956db-4568-4aa9-806f-b6eb458a1562","Type":"ContainerStarted","Data":"f64cf9bb75753426385f86d9d33589888e8c154f3c1421d9a20b7df5f2147782"} Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.729392 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.749050 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.775994 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.789111 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.808958 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tphpp"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.810976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.828447 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.848733 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.858753 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.858828 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.869507 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.890285 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.903246 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.909462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.925464 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.929546 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.949088 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.969044 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.979315 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt"] Jan 31 04:45:43 crc kubenswrapper[4832]: I0131 04:45:43.988444 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.007932 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.028742 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.047845 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.071608 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.089004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.109398 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.128185 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.149406 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.186578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqrt\" (UniqueName: 
\"kubernetes.io/projected/5f6f9f13-7f9c-4ffa-b706-22c973d7c7af-kube-api-access-fmqrt\") pod \"etcd-operator-b45778765-gxndz\" (UID: \"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.202782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n49v\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-kube-api-access-4n49v\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.222647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvvc\" (UniqueName: \"kubernetes.io/projected/06433b0e-798d-4bc9-90da-1a92bbc86acd-kube-api-access-7vvvc\") pod \"console-operator-58897d9998-jkgd6\" (UID: \"06433b0e-798d-4bc9-90da-1a92bbc86acd\") " pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.242211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgs5\" (UniqueName: \"kubernetes.io/projected/bf49ca41-7e40-4d0e-860b-e9fcc6fba998-kube-api-access-8bgs5\") pod \"openshift-controller-manager-operator-756b6f6bc6-drwq6\" (UID: \"bf49ca41-7e40-4d0e-860b-e9fcc6fba998\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.245749 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.263045 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f19cb7c8-32ec-458e-b342-c28b00bcca91-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f88dh\" (UID: \"f19cb7c8-32ec-458e-b342-c28b00bcca91\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.288171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.307934 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.327694 4832 request.go:700] Waited for 1.903183286s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.329660 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.349447 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.371076 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.388525 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 04:45:44 crc 
kubenswrapper[4832]: I0131 04:45:44.408975 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.429087 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.430502 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gxndz"] Jan 31 04:45:44 crc kubenswrapper[4832]: W0131 04:45:44.439899 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6f9f13_7f9c_4ffa_b706_22c973d7c7af.slice/crio-e2131faed996a6f4911218182b20248abae33d7a0d9e8fec893bbc77a779615c WatchSource:0}: Error finding container e2131faed996a6f4911218182b20248abae33d7a0d9e8fec893bbc77a779615c: Status 404 returned error can't find the container with id e2131faed996a6f4911218182b20248abae33d7a0d9e8fec893bbc77a779615c Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.448318 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.488953 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.494057 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.504841 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508222 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-stats-auth\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5645b412-ae27-4065-a722-a0823e0ade35-serving-cert\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956cdaf-8b02-4782-b9cf-2329442f4236-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508301 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-default-certificate\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5645b412-ae27-4065-a722-a0823e0ade35-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994f771-6cff-4fcf-819e-9f3fcba71534-service-ca-bundle\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508431 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956cdaf-8b02-4782-b9cf-2329442f4236-config\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1956cdaf-8b02-4782-b9cf-2329442f4236-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0bbb98c-8414-4ec1-b718-02f5658451dc-metrics-tls\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nx8\" (UniqueName: \"kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-metrics-certs\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm26q\" (UniqueName: \"kubernetes.io/projected/b0bbb98c-8414-4ec1-b718-02f5658451dc-kube-api-access-pm26q\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508820 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 
04:45:44.508844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2p6\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b780f08-8776-4bb1-98c4-c119a0d109df-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508947 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.508969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b780f08-8776-4bb1-98c4-c119a0d109df-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: 
\"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509064 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509131 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx9f\" (UniqueName: \"kubernetes.io/projected/4994f771-6cff-4fcf-819e-9f3fcba71534-kube-api-access-wvx9f\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509207 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509383 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509430 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: 
\"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509582 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwxc\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-kube-api-access-8gwxc\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.509684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8r8\" (UniqueName: \"kubernetes.io/projected/5645b412-ae27-4065-a722-a0823e0ade35-kube-api-access-tp8r8\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.511591 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.011553704 +0000 UTC m=+153.960375399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.518108 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.528738 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.529435 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.550796 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.570156 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.591609 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610247 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8r8\" (UniqueName: \"kubernetes.io/projected/5645b412-ae27-4065-a722-a0823e0ade35-kube-api-access-tp8r8\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/843a67ad-547e-4b69-9cce-e193d0da6f29-serving-cert\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kwc\" (UniqueName: \"kubernetes.io/projected/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-kube-api-access-h9kwc\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwf7k\" (UniqueName: \"kubernetes.io/projected/648f7a8d-616d-4519-9131-aacc3768b09e-kube-api-access-gwf7k\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-stats-auth\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5645b412-ae27-4065-a722-a0823e0ade35-serving-cert\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-default-certificate\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwnl\" (UniqueName: \"kubernetes.io/projected/9b727897-88a9-4e96-9276-6c1df4780622-kube-api-access-8rwnl\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994f771-6cff-4fcf-819e-9f3fcba71534-service-ca-bundle\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610716 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-auth-proxy-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.610776 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.110750157 +0000 UTC m=+154.059571842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956cdaf-8b02-4782-b9cf-2329442f4236-config\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.610984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0bbb98c-8414-4ec1-b718-02f5658451dc-metrics-tls\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611011 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-srv-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4nx8\" (UniqueName: \"kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmklw\" (UniqueName: \"kubernetes.io/projected/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-kube-api-access-pmklw\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc5aa137-eb42-474a-ac07-566db2485e11-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611181 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-metrics-certs\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611199 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6sd\" (UniqueName: \"kubernetes.io/projected/5797b6b7-298b-4e04-8945-0a733f37feaa-kube-api-access-ps6sd\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnb8g\" (UniqueName: \"kubernetes.io/projected/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-kube-api-access-xnb8g\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc 
kubenswrapper[4832]: I0131 04:45:44.611248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm26q\" (UniqueName: \"kubernetes.io/projected/b0bbb98c-8414-4ec1-b718-02f5658451dc-kube-api-access-pm26q\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611294 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89460b6e-1b52-46cc-be8e-2615240712bc-config\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89460b6e-1b52-46cc-be8e-2615240712bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-metrics-tls\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2p6\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrw4\" (UniqueName: \"kubernetes.io/projected/b1263c5e-889e-4e8a-8413-4385286b66dd-kube-api-access-ghrw4\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611456 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b780f08-8776-4bb1-98c4-c119a0d109df-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-certs\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9b727897-88a9-4e96-9276-6c1df4780622-signing-key\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611526 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-registration-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611539 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvx9f\" (UniqueName: \"kubernetes.io/projected/4994f771-6cff-4fcf-819e-9f3fcba71534-kube-api-access-wvx9f\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn5s8\" (UniqueName: \"kubernetes.io/projected/8e0615e3-499b-475a-b9f1-8f15e9706259-kube-api-access-tn5s8\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5jf\" (UniqueName: \"kubernetes.io/projected/bc5aa137-eb42-474a-ac07-566db2485e11-kube-api-access-rq5jf\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6295e26-77d6-4897-9cbf-3ee03632c58d-machine-approver-tls\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a579230-dc70-4599-bbf7-39825122f599-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611742 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611776 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwxc\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-kube-api-access-8gwxc\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843a67ad-547e-4b69-9cce-e193d0da6f29-config\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-socket-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611872 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t7x\" (UniqueName: \"kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-apiservice-cert\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956cdaf-8b02-4782-b9cf-2329442f4236-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwfk\" (UniqueName: 
\"kubernetes.io/projected/843a67ad-547e-4b69-9cce-e193d0da6f29-kube-api-access-hgwfk\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntcc\" (UniqueName: \"kubernetes.io/projected/df6a6161-bca0-4cca-8fa7-7c19a4cadf8e-kube-api-access-7ntcc\") pod \"migrator-59844c95c7-z98ht\" (UID: \"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.611994 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-config-volume\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-csi-data-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-profile-collector-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: 
I0131 04:45:44.612144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5645b412-ae27-4065-a722-a0823e0ade35-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-plugins-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc5aa137-eb42-474a-ac07-566db2485e11-proxy-tls\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-cert\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-srv-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.612989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.614954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1956cdaf-8b02-4782-b9cf-2329442f4236-config\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.622631 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.623084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.623226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0bbb98c-8414-4ec1-b718-02f5658451dc-metrics-tls\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.624804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.625974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-stats-auth\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626349 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5797b6b7-298b-4e04-8945-0a733f37feaa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b727897-88a9-4e96-9276-6c1df4780622-signing-cabundle\") 
pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626505 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1956cdaf-8b02-4782-b9cf-2329442f4236-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e0615e3-499b-475a-b9f1-8f15e9706259-proxy-tls\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.626626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.627600 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.628700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.628811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.631277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5645b412-ae27-4065-a722-a0823e0ade35-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b780f08-8776-4bb1-98c4-c119a0d109df-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632506 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b780f08-8776-4bb1-98c4-c119a0d109df-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632785 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rzk\" (UniqueName: \"kubernetes.io/projected/ecec0f9d-d0ba-4ea2-8034-f8608a415488-kube-api-access-z2rzk\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632824 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.632883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1263c5e-889e-4e8a-8413-4385286b66dd-tmpfs\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: 
\"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.633173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.633233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.633268 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-mountpoint-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.633334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a579230-dc70-4599-bbf7-39825122f599-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc 
kubenswrapper[4832]: I0131 04:45:44.633376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-images\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.634091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.634159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.634197 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-webhook-cert\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.634254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.636287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hjw\" (UniqueName: \"kubernetes.io/projected/a6295e26-77d6-4897-9cbf-3ee03632c58d-kube-api-access-t4hjw\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.636366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2lm\" (UniqueName: \"kubernetes.io/projected/da610d4a-5f00-4de1-a770-9500e64624ed-kube-api-access-kb2lm\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.636402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59n2j\" (UniqueName: \"kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.636541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 
04:45:44.636626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-node-bootstrap-token\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.637236 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1956cdaf-8b02-4782-b9cf-2329442f4236-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.637796 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxkm7\" (UniqueName: \"kubernetes.io/projected/4c69c88d-cd93-44c1-9732-c307060907ec-kube-api-access-rxkm7\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.637855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.638687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" 
(UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.638771 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.638921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4994f771-6cff-4fcf-819e-9f3fcba71534-service-ca-bundle\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.639121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8h9b\" (UniqueName: \"kubernetes.io/projected/7a579230-dc70-4599-bbf7-39825122f599-kube-api-access-t8h9b\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.639165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrm4\" (UniqueName: \"kubernetes.io/projected/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-kube-api-access-mhrm4\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 
04:45:44.639224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-metrics-certs\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.639753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89460b6e-1b52-46cc-be8e-2615240712bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.641103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.642722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b780f08-8776-4bb1-98c4-c119a0d109df-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.645338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " 
pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.649791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.650194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5645b412-ae27-4065-a722-a0823e0ade35-serving-cert\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.653133 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.653649 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2p6\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.659823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4994f771-6cff-4fcf-819e-9f3fcba71534-default-certificate\") pod \"router-default-5444994796-5xpcm\" 
(UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.675688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8r8\" (UniqueName: \"kubernetes.io/projected/5645b412-ae27-4065-a722-a0823e0ade35-kube-api-access-tp8r8\") pod \"openshift-config-operator-7777fb866f-m7pqg\" (UID: \"5645b412-ae27-4065-a722-a0823e0ade35\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.689513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvx9f\" (UniqueName: \"kubernetes.io/projected/4994f771-6cff-4fcf-819e-9f3fcba71534-kube-api-access-wvx9f\") pod \"router-default-5444994796-5xpcm\" (UID: \"4994f771-6cff-4fcf-819e-9f3fcba71534\") " pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.707229 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4nx8\" (UniqueName: \"kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8\") pod \"console-f9d7485db-sjkqt\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.726336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" event={"ID":"7b9d087d-2335-4629-8cc8-6eeb320aa797","Type":"ContainerStarted","Data":"826b86d81be8a4f589d0191ee282b50532981fb53e25afeac8e72dc1118f77b0"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.726393 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" 
event={"ID":"7b9d087d-2335-4629-8cc8-6eeb320aa797","Type":"ContainerStarted","Data":"ee31e7397462e086aec950b4e183926fb75f865711d6195b787d9c0069820de2"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.726405 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" event={"ID":"7b9d087d-2335-4629-8cc8-6eeb320aa797","Type":"ContainerStarted","Data":"b281e7e5648d199538fbe9bde064a121da968b5d4b2a19942f0fcefd0d4adc5f"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.727355 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgd6"] Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.728926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwxc\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-kube-api-access-8gwxc\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.730801 4832 generic.go:334] "Generic (PLEG): container finished" podID="da3cda05-4158-4df2-93ab-6526af2232ba" containerID="7e009dbeaeb9f6bf35731371b2b99287ae086a054bf831bade82676801ec05d8" exitCode=0 Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.731797 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" event={"ID":"da3cda05-4158-4df2-93ab-6526af2232ba","Type":"ContainerDied","Data":"7e009dbeaeb9f6bf35731371b2b99287ae086a054bf831bade82676801ec05d8"} Jan 31 04:45:44 crc kubenswrapper[4832]: W0131 04:45:44.735396 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06433b0e_798d_4bc9_90da_1a92bbc86acd.slice/crio-a4eb8d16da64c3a3134eb4cea2ad4a83e1d3a9e3178b465574331201ee803a46 
WatchSource:0}: Error finding container a4eb8d16da64c3a3134eb4cea2ad4a83e1d3a9e3178b465574331201ee803a46: Status 404 returned error can't find the container with id a4eb8d16da64c3a3134eb4cea2ad4a83e1d3a9e3178b465574331201ee803a46 Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.735641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" event={"ID":"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf","Type":"ContainerStarted","Data":"a854e2d6709b1fc2b40c25f573ac8d3aef64e3dfc441e68b6d3bccf83bdf05e9"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.735674 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" event={"ID":"cc7c44f0-ba3a-4b94-bed8-6be6d94b5abf","Type":"ContainerStarted","Data":"dcb3f58ff655b105e1254e2ccb92fef6497a703967cb83cd2a3486700f5a3f23"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.741007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" event={"ID":"d08c2681-edcf-4634-aede-63eb081e72a0","Type":"ContainerStarted","Data":"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.741088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" event={"ID":"d08c2681-edcf-4634-aede-63eb081e72a0","Type":"ContainerStarted","Data":"55a04650b11c780f01bd2e0875f1a64d7c5ac6ed67ba70b43d9309316348f123"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.741431 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6"] Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742442 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnb8g\" (UniqueName: \"kubernetes.io/projected/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-kube-api-access-xnb8g\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89460b6e-1b52-46cc-be8e-2615240712bc-config\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89460b6e-1b52-46cc-be8e-2615240712bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-metrics-tls\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrw4\" (UniqueName: \"kubernetes.io/projected/b1263c5e-889e-4e8a-8413-4385286b66dd-kube-api-access-ghrw4\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-certs\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-registration-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/9b727897-88a9-4e96-9276-6c1df4780622-signing-key\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn5s8\" (UniqueName: \"kubernetes.io/projected/8e0615e3-499b-475a-b9f1-8f15e9706259-kube-api-access-tn5s8\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq5jf\" (UniqueName: \"kubernetes.io/projected/bc5aa137-eb42-474a-ac07-566db2485e11-kube-api-access-rq5jf\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a579230-dc70-4599-bbf7-39825122f599-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.742991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6295e26-77d6-4897-9cbf-3ee03632c58d-machine-approver-tls\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-socket-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743065 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843a67ad-547e-4b69-9cce-e193d0da6f29-config\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84t7x\" (UniqueName: \"kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntcc\" (UniqueName: \"kubernetes.io/projected/df6a6161-bca0-4cca-8fa7-7c19a4cadf8e-kube-api-access-7ntcc\") pod \"migrator-59844c95c7-z98ht\" (UID: \"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-config-volume\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwfk\" (UniqueName: \"kubernetes.io/projected/843a67ad-547e-4b69-9cce-e193d0da6f29-kube-api-access-hgwfk\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743203 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-plugins-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-csi-data-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-profile-collector-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc5aa137-eb42-474a-ac07-566db2485e11-proxy-tls\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-srv-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-cert\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5797b6b7-298b-4e04-8945-0a733f37feaa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b727897-88a9-4e96-9276-6c1df4780622-signing-cabundle\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e0615e3-499b-475a-b9f1-8f15e9706259-proxy-tls\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rzk\" (UniqueName: \"kubernetes.io/projected/ecec0f9d-d0ba-4ea2-8034-f8608a415488-kube-api-access-z2rzk\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1263c5e-889e-4e8a-8413-4385286b66dd-tmpfs\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743538 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-mountpoint-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a579230-dc70-4599-bbf7-39825122f599-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-images\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.743838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc 
kubenswrapper[4832]: I0131 04:45:44.743999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-webhook-cert\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.744157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hjw\" (UniqueName: \"kubernetes.io/projected/a6295e26-77d6-4897-9cbf-3ee03632c58d-kube-api-access-t4hjw\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.744311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2lm\" (UniqueName: \"kubernetes.io/projected/da610d4a-5f00-4de1-a770-9500e64624ed-kube-api-access-kb2lm\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.744455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59n2j\" (UniqueName: \"kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.744480 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxkm7\" (UniqueName: \"kubernetes.io/projected/4c69c88d-cd93-44c1-9732-c307060907ec-kube-api-access-rxkm7\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: 
\"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-node-bootstrap-token\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745142 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-registration-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8h9b\" (UniqueName: \"kubernetes.io/projected/7a579230-dc70-4599-bbf7-39825122f599-kube-api-access-t8h9b\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745229 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrm4\" (UniqueName: \"kubernetes.io/projected/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-kube-api-access-mhrm4\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745257 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89460b6e-1b52-46cc-be8e-2615240712bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kwc\" (UniqueName: \"kubernetes.io/projected/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-kube-api-access-h9kwc\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/843a67ad-547e-4b69-9cce-e193d0da6f29-serving-cert\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwf7k\" (UniqueName: \"kubernetes.io/projected/648f7a8d-616d-4519-9131-aacc3768b09e-kube-api-access-gwf7k\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwnl\" (UniqueName: \"kubernetes.io/projected/9b727897-88a9-4e96-9276-6c1df4780622-kube-api-access-8rwnl\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-auth-proxy-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-srv-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmklw\" (UniqueName: \"kubernetes.io/projected/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-kube-api-access-pmklw\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc5aa137-eb42-474a-ac07-566db2485e11-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6sd\" (UniqueName: \"kubernetes.io/projected/5797b6b7-298b-4e04-8945-0a733f37feaa-kube-api-access-ps6sd\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.745760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-plugins-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.746758 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6201a1f3-1e3f-43fa-9299-14f4d7d6ac02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v8m5p\" (UID: \"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.746828 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fkwvm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.746889 4832 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.747032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.747403 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/843a67ad-547e-4b69-9cce-e193d0da6f29-config\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.748174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-config-volume\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.748821 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.248808112 +0000 UTC m=+154.197629797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.750468 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-csi-data-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.751407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.751838 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.752798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc5aa137-eb42-474a-ac07-566db2485e11-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: 
\"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.754172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a579230-dc70-4599-bbf7-39825122f599-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.755222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/843a67ad-547e-4b69-9cce-e193d0da6f29-serving-cert\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.755993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.756153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-mountpoint-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.756398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-profile-collector-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a579230-dc70-4599-bbf7-39825122f599-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" event={"ID":"9ed37686-689b-46e5-8069-0e4de3519afb","Type":"ContainerStarted","Data":"79288ba6624132fdff67610d5c5f2592a20d50faefe7180292922d635e2bc5b7"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" event={"ID":"9ed37686-689b-46e5-8069-0e4de3519afb","Type":"ContainerStarted","Data":"61101262bda85075af4350d37144ea6cc99c649bdcd973ad9c2c6e2a48f9891d"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757592 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e0615e3-499b-475a-b9f1-8f15e9706259-proxy-tls\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/8e0615e3-499b-475a-b9f1-8f15e9706259-images\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.757921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b1263c5e-889e-4e8a-8413-4385286b66dd-tmpfs\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.758358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9b727897-88a9-4e96-9276-6c1df4780622-signing-key\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.758415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.758510 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da610d4a-5f00-4de1-a770-9500e64624ed-socket-dir\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.758769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ecec0f9d-d0ba-4ea2-8034-f8608a415488-srv-cert\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.759994 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-cert\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.760610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89460b6e-1b52-46cc-be8e-2615240712bc-config\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.762462 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.762525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89460b6e-1b52-46cc-be8e-2615240712bc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.764015 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-certs\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.764129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/648f7a8d-616d-4519-9131-aacc3768b09e-node-bootstrap-token\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.764630 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.765428 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.765800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 
04:45:44.766937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c69c88d-cd93-44c1-9732-c307060907ec-srv-cert\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tphpp" event={"ID":"577e3549-41e2-4af0-9b37-807d419dfbb9","Type":"ContainerStarted","Data":"90372767d0a8834d433870c2c9e87cb69f586074617223d261403972698ae2e3"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tphpp" event={"ID":"577e3549-41e2-4af0-9b37-807d419dfbb9","Type":"ContainerStarted","Data":"70c463d1c671b8a38a8ac2b1fcf27335f0274eeb1e982d414320a4a97001caa6"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc5aa137-eb42-474a-ac07-566db2485e11-proxy-tls\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-webhook-cert\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767600 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tphpp" Jan 
31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.767841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1263c5e-889e-4e8a-8413-4385286b66dd-apiservice-cert\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.770882 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tphpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.770951 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tphpp" podUID="577e3549-41e2-4af0-9b37-807d419dfbb9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.771502 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.772209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5797b6b7-298b-4e04-8945-0a733f37feaa-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.775158 4832 generic.go:334] "Generic (PLEG): container finished" podID="c65562f0-ce98-4b36-aa5a-bbac4b1515fb" containerID="6c6bee93e21a675f8e62d052ade7ed8d720c5c11e25c885e6dba020013bf76d6" exitCode=0 Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.775771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" event={"ID":"c65562f0-ce98-4b36-aa5a-bbac4b1515fb","Type":"ContainerDied","Data":"6c6bee93e21a675f8e62d052ade7ed8d720c5c11e25c885e6dba020013bf76d6"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.777818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9b727897-88a9-4e96-9276-6c1df4780622-signing-cabundle\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.782972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" event={"ID":"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af","Type":"ContainerStarted","Data":"e2131faed996a6f4911218182b20248abae33d7a0d9e8fec893bbc77a779615c"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.788928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-metrics-tls\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.789933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1956cdaf-8b02-4782-b9cf-2329442f4236-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-qfrnd\" (UID: \"1956cdaf-8b02-4782-b9cf-2329442f4236\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.795071 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" event={"ID":"944af4a5-8c80-4d24-8f2e-ead3cf864aa9","Type":"ContainerStarted","Data":"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.795123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" event={"ID":"944af4a5-8c80-4d24-8f2e-ead3cf864aa9","Type":"ContainerStarted","Data":"c69aa58a1545171ad4dd9fd9f9b8d2d927cb7e301aa1be3b22c12be1eadf2cc8"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.795876 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.797154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" event={"ID":"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0","Type":"ContainerStarted","Data":"b3ddc747d7f38889268a5486cf9546baebfacc80f51b4ad20d6814251a166e44"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.797183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" event={"ID":"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0","Type":"ContainerStarted","Data":"23de51ed6aec4c15fb52121129457bbebf7f4dcf36f0b25afdbf46e9426fd226"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.798145 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:44 crc kubenswrapper[4832]: 
I0131 04:45:44.798231 4832 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k4lgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.798263 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.801189 4832 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-94sbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.801235 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.802413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" event={"ID":"3a7956db-4568-4aa9-806f-b6eb458a1562","Type":"ContainerStarted","Data":"672d64002a6d3f33b8f6cd7e15ca6d4cd7af19f999f33c42eb0e321b55e3160f"} Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.809390 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh"] Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.810963 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.819048 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.821836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b780f08-8776-4bb1-98c4-c119a0d109df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7k6g\" (UID: \"3b780f08-8776-4bb1-98c4-c119a0d109df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.847173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.847343 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.347318714 +0000 UTC m=+154.296140399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.849204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.851249 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.351222734 +0000 UTC m=+154.300044449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.862279 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.863180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwfk\" (UniqueName: \"kubernetes.io/projected/843a67ad-547e-4b69-9cce-e193d0da6f29-kube-api-access-hgwfk\") pod \"service-ca-operator-777779d784-tb2kx\" (UID: \"843a67ad-547e-4b69-9cce-e193d0da6f29\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.872189 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.877644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm26q\" (UniqueName: \"kubernetes.io/projected/b0bbb98c-8414-4ec1-b718-02f5658451dc-kube-api-access-pm26q\") pod \"dns-operator-744455d44c-87hn6\" (UID: \"b0bbb98c-8414-4ec1-b718-02f5658451dc\") " pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.878339 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.881654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6295e26-77d6-4897-9cbf-3ee03632c58d-auth-proxy-config\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.882885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6295e26-77d6-4897-9cbf-3ee03632c58d-machine-approver-tls\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.887355 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.894097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnb8g\" (UniqueName: \"kubernetes.io/projected/830ed5a3-2ef6-4f41-b2a1-3c389ed95c29-kube-api-access-xnb8g\") pod \"multus-admission-controller-857f4d67dd-rggw8\" (UID: \"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.910788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq5jf\" (UniqueName: \"kubernetes.io/projected/bc5aa137-eb42-474a-ac07-566db2485e11-kube-api-access-rq5jf\") pod \"machine-config-controller-84d6567774-ngvhm\" (UID: \"bc5aa137-eb42-474a-ac07-566db2485e11\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.917368 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.934934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6sd\" (UniqueName: \"kubernetes.io/projected/5797b6b7-298b-4e04-8945-0a733f37feaa-kube-api-access-ps6sd\") pod \"control-plane-machine-set-operator-78cbb6b69f-9kpwz\" (UID: \"5797b6b7-298b-4e04-8945-0a733f37feaa\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.950736 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84t7x\" (UniqueName: \"kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x\") pod \"collect-profiles-29497245-nhr44\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.951541 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.951849 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.45182044 +0000 UTC m=+154.400642125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.954875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:44 crc kubenswrapper[4832]: E0131 04:45:44.955261 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.455247046 +0000 UTC m=+154.404068731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.965917 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn5s8\" (UniqueName: \"kubernetes.io/projected/8e0615e3-499b-475a-b9f1-8f15e9706259-kube-api-access-tn5s8\") pod \"machine-config-operator-74547568cd-r5sxd\" (UID: \"8e0615e3-499b-475a-b9f1-8f15e9706259\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.985219 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kwc\" (UniqueName: \"kubernetes.io/projected/db0f0a11-f2c4-4358-8a5a-f6f992f0efc7-kube-api-access-h9kwc\") pod \"dns-default-zz6xx\" (UID: \"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7\") " pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:44 crc kubenswrapper[4832]: I0131 04:45:44.995936 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.018108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hjw\" (UniqueName: \"kubernetes.io/projected/a6295e26-77d6-4897-9cbf-3ee03632c58d-kube-api-access-t4hjw\") pod \"machine-approver-56656f9798-f2qb5\" (UID: \"a6295e26-77d6-4897-9cbf-3ee03632c58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.019953 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.029898 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2lm\" (UniqueName: \"kubernetes.io/projected/da610d4a-5f00-4de1-a770-9500e64624ed-kube-api-access-kb2lm\") pod \"csi-hostpathplugin-ch5bd\" (UID: \"da610d4a-5f00-4de1-a770-9500e64624ed\") " pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.031705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.043989 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.049423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59n2j\" (UniqueName: \"kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j\") pod \"marketplace-operator-79b997595-vg9j4\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.056313 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.057364 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.557345097 +0000 UTC m=+154.506166782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.074830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxkm7\" (UniqueName: \"kubernetes.io/projected/4c69c88d-cd93-44c1-9732-c307060907ec-kube-api-access-rxkm7\") pod \"olm-operator-6b444d44fb-c8qc5\" (UID: \"4c69c88d-cd93-44c1-9732-c307060907ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.086210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.089689 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrm4\" (UniqueName: \"kubernetes.io/projected/1ac5ff82-55d0-4b2d-ab38-7c63c440d523-kube-api-access-mhrm4\") pod \"package-server-manager-789f6589d5-rg2nc\" (UID: \"1ac5ff82-55d0-4b2d-ab38-7c63c440d523\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.096328 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.107386 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntcc\" (UniqueName: \"kubernetes.io/projected/df6a6161-bca0-4cca-8fa7-7c19a4cadf8e-kube-api-access-7ntcc\") pod \"migrator-59844c95c7-z98ht\" (UID: \"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.134983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8h9b\" (UniqueName: \"kubernetes.io/projected/7a579230-dc70-4599-bbf7-39825122f599-kube-api-access-t8h9b\") pod \"kube-storage-version-migrator-operator-b67b599dd-zfstn\" (UID: \"7a579230-dc70-4599-bbf7-39825122f599\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.136114 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.143487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwf7k\" (UniqueName: \"kubernetes.io/projected/648f7a8d-616d-4519-9131-aacc3768b09e-kube-api-access-gwf7k\") pod \"machine-config-server-wht2v\" (UID: \"648f7a8d-616d-4519-9131-aacc3768b09e\") " pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.159977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.160404 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.660389318 +0000 UTC m=+154.609211003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.175694 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwnl\" (UniqueName: \"kubernetes.io/projected/9b727897-88a9-4e96-9276-6c1df4780622-kube-api-access-8rwnl\") pod \"service-ca-9c57cc56f-48nqf\" (UID: \"9b727897-88a9-4e96-9276-6c1df4780622\") " pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.196199 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.201460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmklw\" (UniqueName: \"kubernetes.io/projected/6eb6b06e-b6b2-419d-8415-945f7e5cb2ac-kube-api-access-pmklw\") pod \"ingress-canary-dpgh5\" (UID: \"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac\") " pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.212135 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.212526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89460b6e-1b52-46cc-be8e-2615240712bc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9zg27\" (UID: \"89460b6e-1b52-46cc-be8e-2615240712bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.224267 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.247619 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrw4\" (UniqueName: \"kubernetes.io/projected/b1263c5e-889e-4e8a-8413-4385286b66dd-kube-api-access-ghrw4\") pod \"packageserver-d55dfcdfc-47cvt\" (UID: \"b1263c5e-889e-4e8a-8413-4385286b66dd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.249980 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.256677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rzk\" (UniqueName: \"kubernetes.io/projected/ecec0f9d-d0ba-4ea2-8034-f8608a415488-kube-api-access-z2rzk\") pod \"catalog-operator-68c6474976-l7vcp\" (UID: \"ecec0f9d-d0ba-4ea2-8034-f8608a415488\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.263266 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.263504 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.263785 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.76375899 +0000 UTC m=+154.712580675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.263916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.264336 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.764320797 +0000 UTC m=+154.713142472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.269262 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.282851 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.308998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.355911 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dpgh5" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.366050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.366802 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.866776671 +0000 UTC m=+154.815598356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.367821 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wht2v" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.467601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.468070 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:45.968054027 +0000 UTC m=+154.916875712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.500664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.503905 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.530965 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.534244 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.536667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.539598 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.550318 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.578233 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.578403 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.078381112 +0000 UTC m=+155.027202797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.579042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.579381 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.079370953 +0000 UTC m=+155.028192638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: W0131 04:45:45.580790 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b780f08_8776_4bb1_98c4_c119a0d109df.slice/crio-aeb07a8bb79d0cd8532a7d3ac1f23643beee6f5ff824ee099d5ff246d404904c WatchSource:0}: Error finding container aeb07a8bb79d0cd8532a7d3ac1f23643beee6f5ff824ee099d5ff246d404904c: Status 404 returned error can't find the container with id aeb07a8bb79d0cd8532a7d3ac1f23643beee6f5ff824ee099d5ff246d404904c Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.610319 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.612870 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rggw8"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.668944 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.679078 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.682151 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.682601 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.182554328 +0000 UTC m=+155.131376013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.682671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.683063 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.183052694 +0000 UTC m=+155.131874389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.789257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.789795 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.289403256 +0000 UTC m=+155.238224941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.790130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.790650 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.290629813 +0000 UTC m=+155.239451498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: W0131 04:45:45.809719 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c250428_30ed_4355_9fee_712f4471071c.slice/crio-0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e WatchSource:0}: Error finding container 0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e: Status 404 returned error can't find the container with id 0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.839120 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" event={"ID":"3b780f08-8776-4bb1-98c4-c119a0d109df","Type":"ContainerStarted","Data":"aeb07a8bb79d0cd8532a7d3ac1f23643beee6f5ff824ee099d5ff246d404904c"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.846691 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd"] Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.890579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" event={"ID":"06433b0e-798d-4bc9-90da-1a92bbc86acd","Type":"ContainerStarted","Data":"eef33debc9be5022f4e96dafeff99900ab9a359ce35240b313155b0130e3dbbe"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.890638 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.890649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" event={"ID":"06433b0e-798d-4bc9-90da-1a92bbc86acd","Type":"ContainerStarted","Data":"a4eb8d16da64c3a3134eb4cea2ad4a83e1d3a9e3178b465574331201ee803a46"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.897264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" event={"ID":"5f6f9f13-7f9c-4ffa-b706-22c973d7c7af","Type":"ContainerStarted","Data":"3801e546faf80a65ce201a260e1318460f1b3ff492ed8a7a05279f33c17f2ba9"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.898133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.898337 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.398309267 +0000 UTC m=+155.347130952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.898528 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:45 crc kubenswrapper[4832]: E0131 04:45:45.899008 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.398989448 +0000 UTC m=+155.347811133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.907093 4832 patch_prober.go:28] interesting pod/console-operator-58897d9998-jkgd6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.907156 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" podUID="06433b0e-798d-4bc9-90da-1a92bbc86acd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.934171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" event={"ID":"da3cda05-4158-4df2-93ab-6526af2232ba","Type":"ContainerStarted","Data":"d32d17882a8b16930b5302517c775635de518280bb3c38f46a96e67dce72cded"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.939266 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sjkqt" event={"ID":"70780eee-9367-4fce-923e-fc7b8ec0e88a","Type":"ContainerStarted","Data":"55c158667b42fb711dc200e8325a26d8635b76a5ac8667b76ba6e3fc335a3b9f"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.941781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" event={"ID":"1956cdaf-8b02-4782-b9cf-2329442f4236","Type":"ContainerStarted","Data":"3eca3c4116823c48eb19d652985e5773492fd1145bb5c5c89bf29d3f3ff26020"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.954790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" event={"ID":"5645b412-ae27-4065-a722-a0823e0ade35","Type":"ContainerStarted","Data":"298b9d27df81494389b2b2fc0019cb7b4099f2dac3f2dabb2cba20c0fe4fbea8"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.958444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" event={"ID":"c65562f0-ce98-4b36-aa5a-bbac4b1515fb","Type":"ContainerStarted","Data":"7c2fb6eb15f3280964d5bb058f6b0ea2441c24524fd585417cbca28f55694a4b"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.970960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" event={"ID":"bf49ca41-7e40-4d0e-860b-e9fcc6fba998","Type":"ContainerStarted","Data":"7bad51722076f96444b3bac1aed8934b61a24af439d404ccfce2811206ff9fd2"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.971035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" event={"ID":"bf49ca41-7e40-4d0e-860b-e9fcc6fba998","Type":"ContainerStarted","Data":"7c66d144a41767606daaac952da5c2bdde1587261e390554195915734275fa8d"} Jan 31 04:45:45 crc kubenswrapper[4832]: I0131 04:45:45.977704 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx"] Jan 31 04:45:45 crc kubenswrapper[4832]: W0131 04:45:45.988254 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod843a67ad_547e_4b69_9cce_e193d0da6f29.slice/crio-72305dfe1f3a41505e6b56fe7f69b326bd3c6aee4d52d99219be9f59b8f83cdf WatchSource:0}: Error finding container 72305dfe1f3a41505e6b56fe7f69b326bd3c6aee4d52d99219be9f59b8f83cdf: Status 404 returned error can't find the container with id 72305dfe1f3a41505e6b56fe7f69b326bd3c6aee4d52d99219be9f59b8f83cdf Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:45.997434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5xpcm" event={"ID":"4994f771-6cff-4fcf-819e-9f3fcba71534","Type":"ContainerStarted","Data":"29477f7487412cbf291961ea0fe95cb234334695f3d4d3363d871c905b1b697c"} Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:45.997483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5xpcm" event={"ID":"4994f771-6cff-4fcf-819e-9f3fcba71534","Type":"ContainerStarted","Data":"d837bc40167afc98f5ff1936fe504a0c84da31d8871d43dd2ab1de1f9f6d84c3"} Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.003222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.004810 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.504783114 +0000 UTC m=+155.453604859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.005458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" event={"ID":"f19cb7c8-32ec-458e-b342-c28b00bcca91","Type":"ContainerStarted","Data":"73909195efa2adecfacd9963d16c66387dd0961382a494dafa725b284e432517"} Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.005495 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" event={"ID":"f19cb7c8-32ec-458e-b342-c28b00bcca91","Type":"ContainerStarted","Data":"a350d15c2d91a51eb25dd97451b4f7644219e3cf045d3f0ba23ab0d66e073770"} Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.006304 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tphpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.006344 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tphpp" podUID="577e3549-41e2-4af0-9b37-807d419dfbb9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.006442 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fkwvm 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.006471 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.021049 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ch5bd"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.038915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zz6xx"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.065822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.105657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.108015 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.60799934 +0000 UTC m=+155.556821255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: W0131 04:45:46.123934 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a579230_dc70_4599_bbf7_39825122f599.slice/crio-59068775d546722b04e3f3fa40de8a5f1b08e14902335db48ce29c4cafbeefbb WatchSource:0}: Error finding container 59068775d546722b04e3f3fa40de8a5f1b08e14902335db48ce29c4cafbeefbb: Status 404 returned error can't find the container with id 59068775d546722b04e3f3fa40de8a5f1b08e14902335db48ce29c4cafbeefbb Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.213144 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.214841 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.714819617 +0000 UTC m=+155.663641302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.238694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87hn6"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.274291 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.286487 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.316148 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.316699 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.816676701 +0000 UTC m=+155.765498466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.419150 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.419587 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:46.919549688 +0000 UTC m=+155.868371363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.439261 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48nqf"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.439315 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.485230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.516841 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.521413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.522058 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 04:45:47.021962789 +0000 UTC m=+155.970784474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: W0131 04:45:46.611995 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b727897_88a9_4e96_9276_6c1df4780622.slice/crio-27ba1028ac8260a624c40886000a674df7c23397f36094e2c3bed1e9250d950f WatchSource:0}: Error finding container 27ba1028ac8260a624c40886000a674df7c23397f36094e2c3bed1e9250d950f: Status 404 returned error can't find the container with id 27ba1028ac8260a624c40886000a674df7c23397f36094e2c3bed1e9250d950f Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.623355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.626798 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.126763645 +0000 UTC m=+156.075585330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: W0131 04:45:46.656730 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf6a6161_bca0_4cca_8fa7_7c19a4cadf8e.slice/crio-cf4201f928845e72955acc7e53d28010e538be5856d20f2a9090111f3c8e08b7 WatchSource:0}: Error finding container cf4201f928845e72955acc7e53d28010e538be5856d20f2a9090111f3c8e08b7: Status 404 returned error can't find the container with id cf4201f928845e72955acc7e53d28010e538be5856d20f2a9090111f3c8e08b7 Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.685263 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.687756 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.726446 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.727876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.728235 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.228222857 +0000 UTC m=+156.177044542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: W0131 04:45:46.764184 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecec0f9d_d0ba_4ea2_8034_f8608a415488.slice/crio-37a4c30270ad4f0b29108261e24bf565a50e9f6022a3c4c4e572b3b5cc7c7200 WatchSource:0}: Error finding container 37a4c30270ad4f0b29108261e24bf565a50e9f6022a3c4c4e572b3b5cc7c7200: Status 404 returned error can't find the container with id 37a4c30270ad4f0b29108261e24bf565a50e9f6022a3c4c4e572b3b5cc7c7200 Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.781337 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dpgh5"] Jan 31 04:45:46 crc kubenswrapper[4832]: W0131 04:45:46.787688 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ac5ff82_55d0_4b2d_ab38_7c63c440d523.slice/crio-b951480989ccee1ffd29e60697354e13b144c5a0030702b1496848ec788c6426 WatchSource:0}: Error finding container 
b951480989ccee1ffd29e60697354e13b144c5a0030702b1496848ec788c6426: Status 404 returned error can't find the container with id b951480989ccee1ffd29e60697354e13b144c5a0030702b1496848ec788c6426 Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.820835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.824122 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5"] Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.829145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.829447 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.329430082 +0000 UTC m=+156.278251767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.824641 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn2qs" podStartSLOduration=133.824619114 podStartE2EDuration="2m13.824619114s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:46.823054646 +0000 UTC m=+155.771876331" watchObservedRunningTime="2026-01-31 04:45:46.824619114 +0000 UTC m=+155.773440799" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.863507 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tphpp" podStartSLOduration=133.863485897 podStartE2EDuration="2m13.863485897s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:46.862534207 +0000 UTC m=+155.811355912" watchObservedRunningTime="2026-01-31 04:45:46.863485897 +0000 UTC m=+155.812307582" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.888374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.894423 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: 
Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.894490 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.904941 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" podStartSLOduration=133.904924738 podStartE2EDuration="2m13.904924738s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:46.904116583 +0000 UTC m=+155.852938268" watchObservedRunningTime="2026-01-31 04:45:46.904924738 +0000 UTC m=+155.853746423" Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.930673 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:46 crc kubenswrapper[4832]: E0131 04:45:46.931065 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.431044589 +0000 UTC m=+156.379866274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:46 crc kubenswrapper[4832]: I0131 04:45:46.946651 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f88dh" podStartSLOduration=133.946630467 podStartE2EDuration="2m13.946630467s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:46.943425439 +0000 UTC m=+155.892247154" watchObservedRunningTime="2026-01-31 04:45:46.946630467 +0000 UTC m=+155.895452152" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.037238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.037587 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.537551667 +0000 UTC m=+156.486373352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.069910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" event={"ID":"b0bbb98c-8414-4ec1-b718-02f5658451dc","Type":"ContainerStarted","Data":"fe4a3c7e1e9bdccbff3839412ad64652aa380a41085fd4ec8300ca9c821f67de"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.073404 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98tzb" podStartSLOduration=134.073388576 podStartE2EDuration="2m14.073388576s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:46.981400264 +0000 UTC m=+155.930221949" watchObservedRunningTime="2026-01-31 04:45:47.073388576 +0000 UTC m=+156.022210261" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.083013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wht2v" event={"ID":"648f7a8d-616d-4519-9131-aacc3768b09e","Type":"ContainerStarted","Data":"06259b5ae94f22aade4dd4f9745be98630682fb9f1fa64e3a76fefaefba19d20"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.083081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wht2v" 
event={"ID":"648f7a8d-616d-4519-9131-aacc3768b09e","Type":"ContainerStarted","Data":"7dbace189e307883cc6b486472319d9cec8a06b355b1d8af21087b67fbd46d94"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.084841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zz6xx" event={"ID":"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7","Type":"ContainerStarted","Data":"be93df0239f211977a4e71e4fbdcd2bb954e7f695349f40a05dfabb21583b92b"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.088083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" event={"ID":"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29","Type":"ContainerStarted","Data":"dfc39f2412c11f26d8fcc809c7381612aa28329e7b1f13d4a11b9bbb99bd954b"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.091996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" event={"ID":"a6295e26-77d6-4897-9cbf-3ee03632c58d","Type":"ContainerStarted","Data":"193334c2f03fcf734173937ed17e7cafe8f8b0f555bfe47c5e51dfcbae708107"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.093050 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" event={"ID":"8e0615e3-499b-475a-b9f1-8f15e9706259","Type":"ContainerStarted","Data":"ab7007ab1058ee0675b8c31e9142b1b9d3dcb0650f9dcd0e1e4a11ffdd7d3b40"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.103633 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" podStartSLOduration=134.10352678 podStartE2EDuration="2m14.10352678s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.096809275 +0000 
UTC m=+156.045630950" watchObservedRunningTime="2026-01-31 04:45:47.10352678 +0000 UTC m=+156.052348465" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.104366 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5xpcm" podStartSLOduration=134.104359286 podStartE2EDuration="2m14.104359286s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.072955813 +0000 UTC m=+156.021777498" watchObservedRunningTime="2026-01-31 04:45:47.104359286 +0000 UTC m=+156.053180971" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.108723 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" event={"ID":"7a579230-dc70-4599-bbf7-39825122f599","Type":"ContainerStarted","Data":"59068775d546722b04e3f3fa40de8a5f1b08e14902335db48ce29c4cafbeefbb"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.119907 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" event={"ID":"5797b6b7-298b-4e04-8945-0a733f37feaa","Type":"ContainerStarted","Data":"1fb9c02b4e7e46d3334763c83777103c4ee924568888da4bf084b8e618497c5e"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.119955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" event={"ID":"5797b6b7-298b-4e04-8945-0a733f37feaa","Type":"ContainerStarted","Data":"cda1ec16a1a039274a832e52f6d59621d13f078454c047f14647ef375c8e63a6"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.138600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.139050 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.63903224 +0000 UTC m=+156.587853925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.147204 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vs629" podStartSLOduration=134.14717983 podStartE2EDuration="2m14.14717983s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.1204743 +0000 UTC m=+156.069295985" watchObservedRunningTime="2026-01-31 04:45:47.14717983 +0000 UTC m=+156.096001515" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.147912 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" podStartSLOduration=134.147906842 podStartE2EDuration="2m14.147906842s" podCreationTimestamp="2026-01-31 04:43:33 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.145280101 +0000 UTC m=+156.094101786" watchObservedRunningTime="2026-01-31 04:45:47.147906842 +0000 UTC m=+156.096728527" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.155535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerStarted","Data":"8061de41dc6d487abcc22f2cb141011c6d238cc2aee2d3978b5621574666c027"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.157522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" event={"ID":"ecec0f9d-d0ba-4ea2-8034-f8608a415488","Type":"ContainerStarted","Data":"37a4c30270ad4f0b29108261e24bf565a50e9f6022a3c4c4e572b3b5cc7c7200"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.158686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" event={"ID":"b1263c5e-889e-4e8a-8413-4385286b66dd","Type":"ContainerStarted","Data":"7f1c993a634f41680170500b93a9d60bc207c96c9450d4cbb27cba8477f4a747"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.159974 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" event={"ID":"3b780f08-8776-4bb1-98c4-c119a0d109df","Type":"ContainerStarted","Data":"8d5712a1c7605566c2f26859cc8c1a49b0ae5891789b94da92129cfc3f645cf6"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.164780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" event={"ID":"5645b412-ae27-4065-a722-a0823e0ade35","Type":"ContainerStarted","Data":"b52c7f30a0df81094d7c4e5421b91427829241d4574ac745708b65cfcfa808e8"} Jan 31 04:45:47 crc 
kubenswrapper[4832]: I0131 04:45:47.165455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" event={"ID":"9b727897-88a9-4e96-9276-6c1df4780622","Type":"ContainerStarted","Data":"27ba1028ac8260a624c40886000a674df7c23397f36094e2c3bed1e9250d950f"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.166444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" event={"ID":"843a67ad-547e-4b69-9cce-e193d0da6f29","Type":"ContainerStarted","Data":"72305dfe1f3a41505e6b56fe7f69b326bd3c6aee4d52d99219be9f59b8f83cdf"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.170001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" event={"ID":"3c250428-30ed-4355-9fee-712f4471071c","Type":"ContainerStarted","Data":"0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.171225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" event={"ID":"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02","Type":"ContainerStarted","Data":"419325cd28da031a98d28eddaf343f9c55f49f014dc97a92de616f6f37caedbb"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.172552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" event={"ID":"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e","Type":"ContainerStarted","Data":"cf4201f928845e72955acc7e53d28010e538be5856d20f2a9090111f3c8e08b7"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.174133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" 
event={"ID":"da610d4a-5f00-4de1-a770-9500e64624ed","Type":"ContainerStarted","Data":"e8143565a88a6f3ee8d13240bc182aa84692c68acc1e6796a50baa1d05fcc239"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.176271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" event={"ID":"4c69c88d-cd93-44c1-9732-c307060907ec","Type":"ContainerStarted","Data":"30ee6768adaf956d6bf4c2e152f275997fc369b61f89cdf9e8943310893b7c65"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.179042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dpgh5" event={"ID":"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac","Type":"ContainerStarted","Data":"6648e08aefa74b4327deb682022a77beb0f01c7edc3f0110169e456b87593a20"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.181795 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x6bgt" podStartSLOduration=135.181783981 podStartE2EDuration="2m15.181783981s" podCreationTimestamp="2026-01-31 04:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.179709318 +0000 UTC m=+156.128531013" watchObservedRunningTime="2026-01-31 04:45:47.181783981 +0000 UTC m=+156.130605666" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.181938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" event={"ID":"89460b6e-1b52-46cc-be8e-2615240712bc","Type":"ContainerStarted","Data":"4d4d550653a8cb4bc2a7675089f000e4d70d0226d06372fcca0a9a09da8197bc"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.183668 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" 
event={"ID":"bc5aa137-eb42-474a-ac07-566db2485e11","Type":"ContainerStarted","Data":"144a3b1931d3dd4f077e708a1a71763d9fa5d268891df088d6b55f10f07c1029"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.185320 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" event={"ID":"1ac5ff82-55d0-4b2d-ab38-7c63c440d523","Type":"ContainerStarted","Data":"b951480989ccee1ffd29e60697354e13b144c5a0030702b1496848ec788c6426"} Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.221305 4832 patch_prober.go:28] interesting pod/console-operator-58897d9998-jkgd6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.222347 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" podUID="06433b0e-798d-4bc9-90da-1a92bbc86acd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.239943 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.240615 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 04:45:47.740588725 +0000 UTC m=+156.689410420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.240723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.241180 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.741170623 +0000 UTC m=+156.689992308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.248954 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.271652 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" podStartSLOduration=135.271417931 podStartE2EDuration="2m15.271417931s" podCreationTimestamp="2026-01-31 04:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.239996468 +0000 UTC m=+156.188818163" watchObservedRunningTime="2026-01-31 04:45:47.271417931 +0000 UTC m=+156.220239636" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.274165 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gxndz" podStartSLOduration=134.274133924 podStartE2EDuration="2m14.274133924s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.263148227 +0000 UTC m=+156.211969912" watchObservedRunningTime="2026-01-31 04:45:47.274133924 +0000 UTC m=+156.222955609" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.309283 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-drwq6" podStartSLOduration=134.309259452 podStartE2EDuration="2m14.309259452s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.307416525 +0000 UTC m=+156.256238210" watchObservedRunningTime="2026-01-31 04:45:47.309259452 +0000 UTC m=+156.258081137" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.342267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.342418 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.842397359 +0000 UTC m=+156.791219044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.343047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.351528 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.851504609 +0000 UTC m=+156.800326294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.410454 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" podStartSLOduration=134.410430956 podStartE2EDuration="2m14.410430956s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.384400118 +0000 UTC m=+156.333221803" watchObservedRunningTime="2026-01-31 04:45:47.410430956 +0000 UTC m=+156.359252641" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.428734 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wht2v" podStartSLOduration=5.428707807 podStartE2EDuration="5.428707807s" podCreationTimestamp="2026-01-31 04:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:47.420413462 +0000 UTC m=+156.369235147" watchObservedRunningTime="2026-01-31 04:45:47.428707807 +0000 UTC m=+156.377529492" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.445254 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.445789 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:47.94576931 +0000 UTC m=+156.894590995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.561106 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.561621 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.061558973 +0000 UTC m=+157.010380658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.663265 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.663634 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.163595212 +0000 UTC m=+157.112416897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.764651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.765044 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.265027434 +0000 UTC m=+157.213849119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.872342 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.872480 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.372457021 +0000 UTC m=+157.321278706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.873799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.874187 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.374176223 +0000 UTC m=+157.322997908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.890333 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.890386 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 04:45:47 crc kubenswrapper[4832]: I0131 04:45:47.975477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:47 crc kubenswrapper[4832]: E0131 04:45:47.975990 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.475974166 +0000 UTC m=+157.424795851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.078367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.078953 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.578936435 +0000 UTC m=+157.527758120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.133107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.133174 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.135804 4832 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-6zwmk container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.135859 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" podUID="c65562f0-ce98-4b36-aa5a-bbac4b1515fb" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.180075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:48 crc 
kubenswrapper[4832]: E0131 04:45:48.180393 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.680375847 +0000 UTC m=+157.629197532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.227626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" event={"ID":"a6295e26-77d6-4897-9cbf-3ee03632c58d","Type":"ContainerStarted","Data":"30648c8d8d7d87ea0958cb60ba5b87aa9ba6c9d2be38f30f21593bf3b65ca253"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.264029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" event={"ID":"ecec0f9d-d0ba-4ea2-8034-f8608a415488","Type":"ContainerStarted","Data":"a6391cb4047988e74488a666025ebb229f939539c94bd99315630b7a7e43df8e"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.279725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sjkqt" event={"ID":"70780eee-9367-4fce-923e-fc7b8ec0e88a","Type":"ContainerStarted","Data":"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.281298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.281700 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.781686624 +0000 UTC m=+157.730508309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.324080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" event={"ID":"bc5aa137-eb42-474a-ac07-566db2485e11","Type":"ContainerStarted","Data":"535b2a03d82669a93c91f5666aaa62421d358e086ecc39133ce8acbdd54ac48a"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.332755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" event={"ID":"da3cda05-4158-4df2-93ab-6526af2232ba","Type":"ContainerStarted","Data":"5829989ba3f2039c6ec74e7385876923b21c7c6459d7492c9f47858abd39f5a0"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.340776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 
04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.341240 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.343198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" event={"ID":"b0bbb98c-8414-4ec1-b718-02f5658451dc","Type":"ContainerStarted","Data":"9ea37619ab0457113583bf669390bb85df405dc3849f5fa5d61fca75978f2cae"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.344842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dpgh5" event={"ID":"6eb6b06e-b6b2-419d-8415-945f7e5cb2ac","Type":"ContainerStarted","Data":"2c29265036f94d67125c273b0e4ec7f549a2a75a8fd0e194744dcca37e806828"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.346811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" event={"ID":"89460b6e-1b52-46cc-be8e-2615240712bc","Type":"ContainerStarted","Data":"5bb276d0d1c97902ab3e6b388b13bac8af84e8e5f50c0071292836baecc947fb"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.356793 4832 patch_prober.go:28] interesting pod/apiserver-76f77b778f-v6mt8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.357294 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" podUID="da3cda05-4158-4df2-93ab-6526af2232ba" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.359668 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zz6xx" event={"ID":"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7","Type":"ContainerStarted","Data":"1a3bbc15405f6ce9d3b44e18755d88a69ef1998e66b66bb1d3b3e85d99228e45"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.378445 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sjkqt" podStartSLOduration=135.378420492 podStartE2EDuration="2m15.378420492s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.334799824 +0000 UTC m=+157.283621509" watchObservedRunningTime="2026-01-31 04:45:48.378420492 +0000 UTC m=+157.327242177" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.378687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" event={"ID":"b1263c5e-889e-4e8a-8413-4385286b66dd","Type":"ContainerStarted","Data":"15966da317be469e9c2083cd8434a7b4725b4630e8403cfd1b7f7ec5cae3a4c5"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.378969 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" podStartSLOduration=135.378964809 podStartE2EDuration="2m15.378964809s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.377999309 +0000 UTC m=+157.326820994" watchObservedRunningTime="2026-01-31 04:45:48.378964809 +0000 UTC m=+157.327786494" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.379899 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 
04:45:48.382131 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.383583 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.88356819 +0000 UTC m=+157.832389875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.386740 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-47cvt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.386859 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" podUID="b1263c5e-889e-4e8a-8413-4385286b66dd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 
04:45:48.410145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerStarted","Data":"321dd495da81547de5f9bf345f631af09d381f70394f2a266f352173ffa10968"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.415839 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.419853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" event={"ID":"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e","Type":"ContainerStarted","Data":"b3bd56f1477d7ddc63fb56ee378a19882da17850cca8608dc146d0a0bececd23"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.425930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" event={"ID":"7a579230-dc70-4599-bbf7-39825122f599","Type":"ContainerStarted","Data":"bfc241cccd1b45d5df20ac8a3f067c873c870f5b1624414af7d13ff61e7a12a9"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.426710 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vg9j4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.426793 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: 
I0131 04:45:48.437693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" event={"ID":"1ac5ff82-55d0-4b2d-ab38-7c63c440d523","Type":"ContainerStarted","Data":"4d39567e6e924bcb72d4c22f3b8a8ef6a4131efa6176e1bd691ecf7ce7fefabb"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.441433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" event={"ID":"9b727897-88a9-4e96-9276-6c1df4780622","Type":"ContainerStarted","Data":"75554c6beea7d2c6d94820db073fbd2f000f40d1df7dd27cb5851226c567f5d7"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.442740 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" event={"ID":"6201a1f3-1e3f-43fa-9299-14f4d7d6ac02","Type":"ContainerStarted","Data":"fc7364813e6022b360a47510803aab6774130927ccf2574560440215b6ef3e8f"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.448989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" event={"ID":"4c69c88d-cd93-44c1-9732-c307060907ec","Type":"ContainerStarted","Data":"0a793e3f9965d1c39fa544a5ad1c0132a82b3d76824e389cbc906d5e515102dc"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.449667 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.454189 4832 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c8qc5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.454274 4832 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" podUID="4c69c88d-cd93-44c1-9732-c307060907ec" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.463988 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dpgh5" podStartSLOduration=6.463955546 podStartE2EDuration="6.463955546s" podCreationTimestamp="2026-01-31 04:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.454743794 +0000 UTC m=+157.403565479" watchObservedRunningTime="2026-01-31 04:45:48.463955546 +0000 UTC m=+157.412777231" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.465963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" event={"ID":"1956cdaf-8b02-4782-b9cf-2329442f4236","Type":"ContainerStarted","Data":"ce0cedbe0beb5123bbc6de1f80fbe80b3e690a7eb7e1f468f5a8b085a8235051"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.475547 4832 generic.go:334] "Generic (PLEG): container finished" podID="5645b412-ae27-4065-a722-a0823e0ade35" containerID="b52c7f30a0df81094d7c4e5421b91427829241d4574ac745708b65cfcfa808e8" exitCode=0 Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.475646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" event={"ID":"5645b412-ae27-4065-a722-a0823e0ade35","Type":"ContainerDied","Data":"b52c7f30a0df81094d7c4e5421b91427829241d4574ac745708b65cfcfa808e8"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.493589 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.495365 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:48.995350889 +0000 UTC m=+157.944172564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.502628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" event={"ID":"3b780f08-8776-4bb1-98c4-c119a0d109df","Type":"ContainerStarted","Data":"9939ea44f5a0e3a81ac697d9fae2150a5e88661f48df7c1f4bf2b5f3c3e6a231"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.505344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" event={"ID":"843a67ad-547e-4b69-9cce-e193d0da6f29","Type":"ContainerStarted","Data":"ab144823039f2ded963f0ed5c6f13612e0b6fbf944d0966f3781f7f12e859d62"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.522181 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9zg27" podStartSLOduration=135.522159112 podStartE2EDuration="2m15.522159112s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.517280462 +0000 UTC m=+157.466102147" watchObservedRunningTime="2026-01-31 04:45:48.522159112 +0000 UTC m=+157.470980797" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.539891 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.540250 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.547657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" event={"ID":"3c250428-30ed-4355-9fee-712f4471071c","Type":"ContainerStarted","Data":"bbe82be41a5c757c99c249fb8da52bdd964f456dabf3879498a488e06ad1378e"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.592237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" event={"ID":"8e0615e3-499b-475a-b9f1-8f15e9706259","Type":"ContainerStarted","Data":"0457e0b5fa83898b478d49aeabec08d28817fe40110f2369cd2028b508734baf"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.596320 
4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.597639 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.097613137 +0000 UTC m=+158.046434822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.617297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" event={"ID":"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29","Type":"ContainerStarted","Data":"87fcde3823ec99d93810537faf77f64881616687dedd60fe448d88e746efc44c"} Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.654461 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7k6g" podStartSLOduration=135.65443677 podStartE2EDuration="2m15.65443677s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.619367574 
+0000 UTC m=+157.568189259" watchObservedRunningTime="2026-01-31 04:45:48.65443677 +0000 UTC m=+157.603258445" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.654599 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" podStartSLOduration=135.654594855 podStartE2EDuration="2m15.654594855s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.65182618 +0000 UTC m=+157.600647865" watchObservedRunningTime="2026-01-31 04:45:48.654594855 +0000 UTC m=+157.603416540" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.717723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.719014 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.21899933 +0000 UTC m=+158.167821015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.741664 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-48nqf" podStartSLOduration=135.741640745 podStartE2EDuration="2m15.741640745s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.739781018 +0000 UTC m=+157.688602713" watchObservedRunningTime="2026-01-31 04:45:48.741640745 +0000 UTC m=+157.690462430" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.788236 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qfrnd" podStartSLOduration=135.788217425 podStartE2EDuration="2m15.788217425s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.787692638 +0000 UTC m=+157.736514323" watchObservedRunningTime="2026-01-31 04:45:48.788217425 +0000 UTC m=+157.737039110" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.839659 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.840211 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.340135747 +0000 UTC m=+158.288957432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.840712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.841260 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.341249451 +0000 UTC m=+158.290071126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.852787 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tb2kx" podStartSLOduration=135.852756114 podStartE2EDuration="2m15.852756114s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.835818294 +0000 UTC m=+157.784639989" watchObservedRunningTime="2026-01-31 04:45:48.852756114 +0000 UTC m=+157.801577809" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.912091 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:48 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:48 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:48 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.912160 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.931688 4832 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zfstn" podStartSLOduration=135.931670335 podStartE2EDuration="2m15.931670335s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.893963778 +0000 UTC m=+157.842785463" watchObservedRunningTime="2026-01-31 04:45:48.931670335 +0000 UTC m=+157.880492020" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.932280 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" podStartSLOduration=135.932275003 podStartE2EDuration="2m15.932275003s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.931103737 +0000 UTC m=+157.879925422" watchObservedRunningTime="2026-01-31 04:45:48.932275003 +0000 UTC m=+157.881096688" Jan 31 04:45:48 crc kubenswrapper[4832]: I0131 04:45:48.943249 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:48 crc kubenswrapper[4832]: E0131 04:45:48.943650 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.443634882 +0000 UTC m=+158.392456567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.001481 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v8m5p" podStartSLOduration=136.001458356 podStartE2EDuration="2m16.001458356s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.967934177 +0000 UTC m=+157.916755862" watchObservedRunningTime="2026-01-31 04:45:49.001458356 +0000 UTC m=+157.950280041" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.008802 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" podStartSLOduration=136.00877199 podStartE2EDuration="2m16.00877199s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:48.998824635 +0000 UTC m=+157.947646340" watchObservedRunningTime="2026-01-31 04:45:49.00877199 +0000 UTC m=+157.957593675" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.034851 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9kpwz" podStartSLOduration=136.034827679 podStartE2EDuration="2m16.034827679s" podCreationTimestamp="2026-01-31 04:43:33 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.032447706 +0000 UTC m=+157.981269391" watchObservedRunningTime="2026-01-31 04:45:49.034827679 +0000 UTC m=+157.983649364" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.050131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.050507 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" podStartSLOduration=49.05048692 podStartE2EDuration="49.05048692s" podCreationTimestamp="2026-01-31 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.050464239 +0000 UTC m=+157.999285924" watchObservedRunningTime="2026-01-31 04:45:49.05048692 +0000 UTC m=+157.999308605" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.050882 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.550864082 +0000 UTC m=+158.499685767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.151898 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.152141 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.652109938 +0000 UTC m=+158.600931663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.254425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.254904 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.754884101 +0000 UTC m=+158.703705786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.355672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.355893 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.855861709 +0000 UTC m=+158.804683394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.356387 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.356917 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.85690377 +0000 UTC m=+158.805725595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.458123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.458330 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.958300381 +0000 UTC m=+158.907122066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.458427 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.458893 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:49.958876439 +0000 UTC m=+158.907698124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.559640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.560154 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.060111454 +0000 UTC m=+159.008933139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.623916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" event={"ID":"1ac5ff82-55d0-4b2d-ab38-7c63c440d523","Type":"ContainerStarted","Data":"53ead29dcf8f393eb0aa67edbc12048517279dc318a050a2607dc2ce02e8a038"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.624069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.626024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zz6xx" event={"ID":"db0f0a11-f2c4-4358-8a5a-f6f992f0efc7","Type":"ContainerStarted","Data":"2fc1e6ceae5fc5b380fb154d68234e361f40c8667564589ba60cfee87457a2e0"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.627991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" event={"ID":"830ed5a3-2ef6-4f41-b2a1-3c389ed95c29","Type":"ContainerStarted","Data":"2d3610203fc77b1854ea3e2bf9c3e23fff83dca35fecbe37097d2f607653ec55"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.629833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" 
event={"ID":"5645b412-ae27-4065-a722-a0823e0ade35","Type":"ContainerStarted","Data":"fb847b95f05a0b22411e3a87c00de322ba5a4ae4e27511d2f438dc3093a95aea"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.629973 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.631334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" event={"ID":"bc5aa137-eb42-474a-ac07-566db2485e11","Type":"ContainerStarted","Data":"6af8316e444fb9236abf24bd64c0544dcf72cfc2925180073387a942aeb09e2c"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.633453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" event={"ID":"a6295e26-77d6-4897-9cbf-3ee03632c58d","Type":"ContainerStarted","Data":"dd5502585d310ea12f0632e0664f0edf81ed1987239915a9aceee5d58cb0b009"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.636005 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" event={"ID":"8e0615e3-499b-475a-b9f1-8f15e9706259","Type":"ContainerStarted","Data":"69cdf53d96f4e2a92b1c8a74b2809bc19fd0b0bf9d9c05e6111cf4da35ea9adf"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.639190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" event={"ID":"df6a6161-bca0-4cca-8fa7-7c19a4cadf8e","Type":"ContainerStarted","Data":"33b96e9639189b856e234b73de8c5552ab19b9758155353f56b99b045b9c1bc3"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.641068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" 
event={"ID":"b0bbb98c-8414-4ec1-b718-02f5658451dc","Type":"ContainerStarted","Data":"092c5ac7b75e94a4906fd6178299c8adbb3f582cb3e2d0704ac539cad4791578"} Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.641714 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-47cvt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.641772 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" podUID="b1263c5e-889e-4e8a-8413-4385286b66dd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.642121 4832 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c8qc5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.642608 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vg9j4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.642625 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" podUID="4c69c88d-cd93-44c1-9732-c307060907ec" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: 
connection refused" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.642659 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.661761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.662306 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.162287059 +0000 UTC m=+159.111108744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.669312 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" podStartSLOduration=136.669284254 podStartE2EDuration="2m16.669284254s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.666631322 +0000 UTC m=+158.615453007" watchObservedRunningTime="2026-01-31 04:45:49.669284254 +0000 UTC m=+158.618105939" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.730890 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-87hn6" podStartSLOduration=136.730866723 podStartE2EDuration="2m16.730866723s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.698356875 +0000 UTC m=+158.647178560" watchObservedRunningTime="2026-01-31 04:45:49.730866723 +0000 UTC m=+158.679688408" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.763688 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.764900 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.264868636 +0000 UTC m=+159.213690351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.807869 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" podStartSLOduration=136.807826963 podStartE2EDuration="2m16.807826963s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.739063794 +0000 UTC m=+158.687885479" watchObservedRunningTime="2026-01-31 04:45:49.807826963 +0000 UTC m=+158.756648658" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.869871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 
04:45:49.870247 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.370231138 +0000 UTC m=+159.319052823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.877786 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" podStartSLOduration=136.877768269 podStartE2EDuration="2m16.877768269s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.876178601 +0000 UTC m=+158.825000286" watchObservedRunningTime="2026-01-31 04:45:49.877768269 +0000 UTC m=+158.826589954" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.879918 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f2qb5" podStartSLOduration=137.879910236 podStartE2EDuration="2m17.879910236s" podCreationTimestamp="2026-01-31 04:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.818759849 +0000 UTC m=+158.767581524" watchObservedRunningTime="2026-01-31 04:45:49.879910236 +0000 UTC m=+158.828731911" Jan 31 04:45:49 crc 
kubenswrapper[4832]: I0131 04:45:49.911231 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:49 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:49 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:49 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.911306 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.967458 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zz6xx" podStartSLOduration=7.9674363 podStartE2EDuration="7.9674363s" podCreationTimestamp="2026-01-31 04:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:49.962809148 +0000 UTC m=+158.911630853" watchObservedRunningTime="2026-01-31 04:45:49.9674363 +0000 UTC m=+158.916257985" Jan 31 04:45:49 crc kubenswrapper[4832]: I0131 04:45:49.971802 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:49 crc kubenswrapper[4832]: E0131 04:45:49.972355 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.472334501 +0000 UTC m=+159.421156186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.016291 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rggw8" podStartSLOduration=137.016273029 podStartE2EDuration="2m17.016273029s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:50.015915678 +0000 UTC m=+158.964737373" watchObservedRunningTime="2026-01-31 04:45:50.016273029 +0000 UTC m=+158.965094714" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.074070 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.074497 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 04:45:50.574482544 +0000 UTC m=+159.523304229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.095363 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ngvhm" podStartSLOduration=137.095344635 podStartE2EDuration="2m17.095344635s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:50.083846402 +0000 UTC m=+159.032668087" watchObservedRunningTime="2026-01-31 04:45:50.095344635 +0000 UTC m=+159.044166320" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.175521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.175761 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.67572505 +0000 UTC m=+159.624546735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.175958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.176312 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.676296338 +0000 UTC m=+159.625118023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.196480 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z98ht" podStartSLOduration=137.196463096 podStartE2EDuration="2m17.196463096s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:50.193508606 +0000 UTC m=+159.142330301" watchObservedRunningTime="2026-01-31 04:45:50.196463096 +0000 UTC m=+159.145284781" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.261877 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5sxd" podStartSLOduration=137.261856572 podStartE2EDuration="2m17.261856572s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:50.258267902 +0000 UTC m=+159.207089597" watchObservedRunningTime="2026-01-31 04:45:50.261856572 +0000 UTC m=+159.210678257" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.277104 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.277256 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.777230044 +0000 UTC m=+159.726051729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.277824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.278209 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.778197654 +0000 UTC m=+159.727019339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.380012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.380246 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.880222214 +0000 UTC m=+159.829043909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.380626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.381042 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.881031839 +0000 UTC m=+159.829853524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.483399 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.483613 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.983582405 +0000 UTC m=+159.932404100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.483897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.484494 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:50.984486123 +0000 UTC m=+159.933307808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.586079 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.586332 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.086280715 +0000 UTC m=+160.035102400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.586746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.587227 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.087215544 +0000 UTC m=+160.036037299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.649058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" event={"ID":"da610d4a-5f00-4de1-a770-9500e64624ed","Type":"ContainerStarted","Data":"e423ed3661e5f6c2eb1bec7f0833550db65b629f5fb87d9d851a2ca5fc414612"} Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.650235 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.650296 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.674237 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-l7vcp" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.688365 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.688585 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.188538533 +0000 UTC m=+160.137360218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.688748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.689102 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.18909518 +0000 UTC m=+160.137916865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.790061 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.790274 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.290245052 +0000 UTC m=+160.239066737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.790787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.791425 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.291408108 +0000 UTC m=+160.240229793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.892471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.892721 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.392690375 +0000 UTC m=+160.341512060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.892792 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.893191 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.393182381 +0000 UTC m=+160.342004066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.897703 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:50 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:50 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:50 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.897767 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.994807 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.995036 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.495000504 +0000 UTC m=+160.443822199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:50 crc kubenswrapper[4832]: I0131 04:45:50.995212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:50 crc kubenswrapper[4832]: E0131 04:45:50.995691 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.495676025 +0000 UTC m=+160.444497920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.096839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.097001 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.596974352 +0000 UTC m=+160.545796037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.097125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.097445 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.597435337 +0000 UTC m=+160.546257022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.197972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.198187 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.698155257 +0000 UTC m=+160.646976942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.198239 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.198692 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.698671762 +0000 UTC m=+160.647493447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.298874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.299140 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.799095503 +0000 UTC m=+160.747917198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.299278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.299843 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.799828415 +0000 UTC m=+160.748650280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.400853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.401086 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.901057641 +0000 UTC m=+160.849879326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.401280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.401606 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:51.901599208 +0000 UTC m=+160.850420893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.502313 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.502586 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.002532294 +0000 UTC m=+160.951353979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.503011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.503197 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.003182554 +0000 UTC m=+160.952004239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.605401 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.605658 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.105626427 +0000 UTC m=+161.054448112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.605970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.606372 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.106355889 +0000 UTC m=+161.055177574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.649469 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-47cvt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.649550 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" podUID="b1263c5e-889e-4e8a-8413-4385286b66dd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.655068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" event={"ID":"da610d4a-5f00-4de1-a770-9500e64624ed","Type":"ContainerStarted","Data":"2cad4ec98c8a15b08955f11e3c0372c2ea653717afe87ad8d9f18255479b18a4"} Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.707270 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.707507 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.207453521 +0000 UTC m=+161.156275206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.707731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.708106 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.208090161 +0000 UTC m=+161.156911846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.809226 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.810260 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.310232834 +0000 UTC m=+161.259054519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.811037 4832 csr.go:261] certificate signing request csr-zq877 is approved, waiting to be issued Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.824322 4832 csr.go:257] certificate signing request csr-zq877 is issued Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.899831 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:51 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:51 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:51 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.899888 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:51 crc kubenswrapper[4832]: I0131 04:45:51.912314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:51 crc kubenswrapper[4832]: E0131 04:45:51.912841 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.412825911 +0000 UTC m=+161.361647596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.013782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.013945 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.513919752 +0000 UTC m=+161.462741437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.014274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.014662 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.514647925 +0000 UTC m=+161.463469610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.115356 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.115484 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.615442447 +0000 UTC m=+161.564264132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.115679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.116038 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.616030385 +0000 UTC m=+161.564852070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.216453 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.217157 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.717138167 +0000 UTC m=+161.665959852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.318441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.318865 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.818849657 +0000 UTC m=+161.767671342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.346465 4832 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.420348 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.420643 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.920621199 +0000 UTC m=+161.869442894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.420787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.421182 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:52.921171927 +0000 UTC m=+161.869993612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.492674 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.493954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.498539 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.509226 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.523166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.523549 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.023534576 +0000 UTC m=+161.972356261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.624531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.624604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.624842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j45hs\" (UniqueName: \"kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.624983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.625359 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.12534377 +0000 UTC m=+162.074165455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.662521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" event={"ID":"da610d4a-5f00-4de1-a770-9500e64624ed","Type":"ContainerStarted","Data":"f68c17f8d7fd160cac90fdbd91f84ec584f845e4a652dc64c057a0422c3a608e"} Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.662832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" event={"ID":"da610d4a-5f00-4de1-a770-9500e64624ed","Type":"ContainerStarted","Data":"5b1ec5d12cee6ceeec15a0f03319fb3906f98cb8a2dfc5f05c567bfafdb2d063"} Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.664320 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c250428-30ed-4355-9fee-712f4471071c" containerID="bbe82be41a5c757c99c249fb8da52bdd964f456dabf3879498a488e06ad1378e" exitCode=0 Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.664433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" event={"ID":"3c250428-30ed-4355-9fee-712f4471071c","Type":"ContainerDied","Data":"bbe82be41a5c757c99c249fb8da52bdd964f456dabf3879498a488e06ad1378e"} Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.685341 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ch5bd" podStartSLOduration=10.685312069 podStartE2EDuration="10.685312069s" podCreationTimestamp="2026-01-31 04:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:52.683847155 +0000 UTC m=+161.632668850" watchObservedRunningTime="2026-01-31 04:45:52.685312069 +0000 UTC m=+161.634133754" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.692497 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.693502 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.697071 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.714065 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.726031 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.726251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.726333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.726404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j45hs\" (UniqueName: \"kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " 
pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.726894 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.226875005 +0000 UTC m=+162.175696690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.727318 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.727541 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content\") pod \"certified-operators-5sr9c\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.759273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j45hs\" (UniqueName: \"kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs\") pod \"certified-operators-5sr9c\" (UID: 
\"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.807237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.826389 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 04:40:51 +0000 UTC, rotation deadline is 2026-12-06 08:01:20.048533579 +0000 UTC Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.826459 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7419h15m27.222077969s for next certificate rotation Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.828256 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.828328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.828401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj49g\" (UniqueName: \"kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc 
kubenswrapper[4832]: I0131 04:45:52.828488 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.828823 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.328810472 +0000 UTC m=+162.277632157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.896108 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:52 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:52 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:52 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.896239 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.898285 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.901329 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.919291 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.935786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.936244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj49g\" (UniqueName: \"kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.936354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.936417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.937168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: E0131 04:45:52.937259 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.437236158 +0000 UTC m=+162.386057843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.938115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:52 crc kubenswrapper[4832]: I0131 04:45:52.961244 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nj49g\" (UniqueName: \"kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g\") pod \"community-operators-f26d5\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.007324 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.038630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.038730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbt7\" (UniqueName: \"kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.038761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.038782 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:53 crc kubenswrapper[4832]: E0131 04:45:53.039106 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 04:45:53.539090043 +0000 UTC m=+162.487911728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bj2zs" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.048086 4832 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T04:45:52.346496035Z","Handler":null,"Name":""} Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.071305 4832 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.071348 4832 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.097513 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.098723 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.102884 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 04:45:53 crc kubenswrapper[4832]: W0131 04:45:53.110257 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383f7aea_cd36_47b8_8a08_fbb8a60e9ab5.slice/crio-888ed1e38fa5f463b77074a38e1b6d11acdc8bf559c476e8502cafc32df85e0b WatchSource:0}: Error finding container 888ed1e38fa5f463b77074a38e1b6d11acdc8bf559c476e8502cafc32df85e0b: Status 404 returned error can't find the container with id 888ed1e38fa5f463b77074a38e1b6d11acdc8bf559c476e8502cafc32df85e0b Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.121679 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.152282 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.152478 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.152539 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbt7\" (UniqueName: \"kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.152580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.153009 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.153341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.160767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.170719 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") 
pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.181962 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zwmk" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.206148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbt7\" (UniqueName: \"kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7\") pod \"certified-operators-ns9df\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.234071 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.255959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.256107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f77t\" (UniqueName: \"kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.257173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.258158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.274766 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.275429 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.353642 4832 patch_prober.go:28] interesting pod/apiserver-76f77b778f-v6mt8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]log ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]etcd ok Jan 31 04:45:53 crc kubenswrapper[4832]: 
[+]poststarthook/start-apiserver-admission-initializer ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/max-in-flight-filter ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 04:45:53 crc kubenswrapper[4832]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-startinformers ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 04:45:53 crc kubenswrapper[4832]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 04:45:53 crc kubenswrapper[4832]: livez check failed Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.353782 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" podUID="da3cda05-4158-4df2-93ab-6526af2232ba" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.362259 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.362741 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.362812 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.362893 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f77t\" (UniqueName: \"kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.363365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.397744 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f77t\" (UniqueName: \"kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t\") pod \"community-operators-dhxc2\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.431975 4832 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.437808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bj2zs\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.530762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.553503 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.589161 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tphpp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.589226 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tphpp" podUID="577e3549-41e2-4af0-9b37-807d419dfbb9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.589412 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-tphpp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 
04:45:53.589432 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tphpp" podUID="577e3549-41e2-4af0-9b37-807d419dfbb9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.679454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerStarted","Data":"594181eac747b298e40d2e7fbedf0cdd5201f9f30aa359875588ede64b420315"} Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.682576 4832 generic.go:334] "Generic (PLEG): container finished" podID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerID="b95ae123252ef8d2c53065b2e3f18618ef7ed93e8e1b76e980dcf18f1dc5456a" exitCode=0 Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.682746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerDied","Data":"b95ae123252ef8d2c53065b2e3f18618ef7ed93e8e1b76e980dcf18f1dc5456a"} Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.687702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerStarted","Data":"888ed1e38fa5f463b77074a38e1b6d11acdc8bf559c476e8502cafc32df85e0b"} Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.695900 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.697106 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.700259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.701586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.701637 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.706068 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.784926 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:45:53 crc kubenswrapper[4832]: W0131 04:45:53.799524 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1980967b_05ed_409d_9774_a946cecfda9c.slice/crio-7f48aae9ae7933d0e072721f46ef17d5a758b4fe1bc64c4a07f8cd343471876d WatchSource:0}: Error finding container 7f48aae9ae7933d0e072721f46ef17d5a758b4fe1bc64c4a07f8cd343471876d: Status 404 returned error can't find the container with id 7f48aae9ae7933d0e072721f46ef17d5a758b4fe1bc64c4a07f8cd343471876d Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.827624 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m7pqg" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.847532 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.850800 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.872110 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.872228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.895161 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:53 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:53 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:53 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.895309 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.906147 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.978431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.978648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:53 crc kubenswrapper[4832]: I0131 04:45:53.978725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.005878 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.013725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.062373 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.183328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume\") pod \"3c250428-30ed-4355-9fee-712f4471071c\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.183400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume\") pod \"3c250428-30ed-4355-9fee-712f4471071c\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.183529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84t7x\" (UniqueName: \"kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x\") pod \"3c250428-30ed-4355-9fee-712f4471071c\" (UID: \"3c250428-30ed-4355-9fee-712f4471071c\") " Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.185392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c250428-30ed-4355-9fee-712f4471071c" (UID: "3c250428-30ed-4355-9fee-712f4471071c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.189103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x" (OuterVolumeSpecName: "kube-api-access-84t7x") pod "3c250428-30ed-4355-9fee-712f4471071c" (UID: "3c250428-30ed-4355-9fee-712f4471071c"). 
InnerVolumeSpecName "kube-api-access-84t7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.195132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c250428-30ed-4355-9fee-712f4471071c" (UID: "3c250428-30ed-4355-9fee-712f4471071c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.286039 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c250428-30ed-4355-9fee-712f4471071c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.286074 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c250428-30ed-4355-9fee-712f4471071c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.286084 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84t7x\" (UniqueName: \"kubernetes.io/projected/3c250428-30ed-4355-9fee-712f4471071c-kube-api-access-84t7x\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.352283 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 04:45:54 crc kubenswrapper[4832]: W0131 04:45:54.358905 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb701fb1e_3de6_4a54_9c06_3afca9a1cc3c.slice/crio-e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1 WatchSource:0}: Error finding container e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1: Status 404 returned error can't find the container with id 
e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1 Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.482142 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:45:54 crc kubenswrapper[4832]: E0131 04:45:54.482695 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c250428-30ed-4355-9fee-712f4471071c" containerName="collect-profiles" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.482708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c250428-30ed-4355-9fee-712f4471071c" containerName="collect-profiles" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.482843 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c250428-30ed-4355-9fee-712f4471071c" containerName="collect-profiles" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.486178 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.488704 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.494159 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.499573 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jkgd6" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.589793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " 
pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.589868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c49t\" (UniqueName: \"kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.590020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.689888 4832 generic.go:334] "Generic (PLEG): container finished" podID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerID="ad4876d71ccc76d6044ca0409dd9c74ed8c43d60f2a616ac88c1ec2669e48957" exitCode=0 Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.689963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerDied","Data":"ad4876d71ccc76d6044ca0409dd9c74ed8c43d60f2a616ac88c1ec2669e48957"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.689997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerStarted","Data":"5d65914a7639dcaaf64302de082c7bdd6a4ca6a74043d5d060c609268b9dd9f3"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.691794 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.691844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c49t\" (UniqueName: \"kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.691935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.692430 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.692935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.699040 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.700037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44" event={"ID":"3c250428-30ed-4355-9fee-712f4471071c","Type":"ContainerDied","Data":"0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.700092 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f6ec42cec5206ba8a7c8563cd0c3c64c4c9cea90d29e3d70f40bb6a21f6a19e" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.703199 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c","Type":"ContainerStarted","Data":"e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.704619 4832 generic.go:334] "Generic (PLEG): container finished" podID="1980967b-05ed-409d-9774-a946cecfda9c" containerID="0ca71dc4249a7e4d56db10712241781c78e32e8e1b61b57f033225c94171b07b" exitCode=0 Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.704769 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerDied","Data":"0ca71dc4249a7e4d56db10712241781c78e32e8e1b61b57f033225c94171b07b"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.704790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerStarted","Data":"7f48aae9ae7933d0e072721f46ef17d5a758b4fe1bc64c4a07f8cd343471876d"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.706504 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" event={"ID":"0a2dfeb3-8dde-421d-9e1b-74cb967fb520","Type":"ContainerStarted","Data":"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.706550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" event={"ID":"0a2dfeb3-8dde-421d-9e1b-74cb967fb520","Type":"ContainerStarted","Data":"9eb125cb2c2a3ab664b779673eb2ddd758a258c2a4f0ce9210bee4defa6a7e8f"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.706712 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.708061 4832 generic.go:334] "Generic (PLEG): container finished" podID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerID="e0e1de7c46c907e873b65bfdd3a08083456a2d2d4601e36ede6b7e350ca209c4" exitCode=0 Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.708091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerDied","Data":"e0e1de7c46c907e873b65bfdd3a08083456a2d2d4601e36ede6b7e350ca209c4"} Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.728471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c49t\" (UniqueName: \"kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t\") pod \"redhat-marketplace-4bzcs\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.765675 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" podStartSLOduration=141.76565336 
podStartE2EDuration="2m21.76565336s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:54.763479484 +0000 UTC m=+163.712301179" watchObservedRunningTime="2026-01-31 04:45:54.76565336 +0000 UTC m=+163.714475045" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.811982 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.812057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.813308 4832 patch_prober.go:28] interesting pod/console-f9d7485db-sjkqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.813391 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sjkqt" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.815731 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.884130 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.885375 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.891243 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.904101 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:54 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:54 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:54 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.904179 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:54 crc kubenswrapper[4832]: I0131 04:45:54.940771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.003353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.003455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5qj\" (UniqueName: \"kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj\") pod \"redhat-marketplace-6spnx\" (UID: 
\"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.003491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.105177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.105271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5qj\" (UniqueName: \"kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.105300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.105937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities\") pod \"redhat-marketplace-6spnx\" (UID: 
\"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.106117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.131243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5qj\" (UniqueName: \"kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj\") pod \"redhat-marketplace-6spnx\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.215508 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.256063 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c8qc5" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.291953 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.299776 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.509906 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:45:55 crc kubenswrapper[4832]: W0131 04:45:55.533898 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab87f796_68ff_4e78_b131_813998e00539.slice/crio-20824852933ee24cdd78770dcd1244a015c4ce97729dd6a6727f4a5c5837515a WatchSource:0}: Error finding container 20824852933ee24cdd78770dcd1244a015c4ce97729dd6a6727f4a5c5837515a: Status 404 returned error can't find the container with id 20824852933ee24cdd78770dcd1244a015c4ce97729dd6a6727f4a5c5837515a Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.545199 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-47cvt" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.721573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerStarted","Data":"20824852933ee24cdd78770dcd1244a015c4ce97729dd6a6727f4a5c5837515a"} Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.724244 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerID="88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c" exitCode=0 Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.724803 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerDied","Data":"88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c"} Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.724869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerStarted","Data":"f6fed4e35db88f547680cc1aa8e5a1cbd7320214f510c968891e5fbe134acd3d"} Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.737312 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" containerID="b6e9d934fe75359d8b033d60289de64c9e459ff22f671c6200886b4a89b5cae1" exitCode=0 Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.737419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c","Type":"ContainerDied","Data":"b6e9d934fe75359d8b033d60289de64c9e459ff22f671c6200886b4a89b5cae1"} Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.894104 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.895312 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:55 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:55 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:55 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.895397 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.896964 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.900749 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:45:55 crc kubenswrapper[4832]: I0131 04:45:55.903155 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.022106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65qw\" (UniqueName: \"kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.022170 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.022277 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.123940 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content\") pod \"redhat-operators-s577w\" (UID: 
\"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.124066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.124092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65qw\" (UniqueName: \"kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.124769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.125153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.157098 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65qw\" (UniqueName: \"kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw\") pod \"redhat-operators-s577w\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " 
pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.225888 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.233669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88205cd8-6bbf-40af-a0d1-bfae431d97e7-metrics-certs\") pod \"network-metrics-daemon-rbg9h\" (UID: \"88205cd8-6bbf-40af-a0d1-bfae431d97e7\") " pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.239708 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.287217 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.308543 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.308853 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.398960 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.399730 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rbg9h" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.399818 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.404187 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.404502 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.430747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl4c\" (UniqueName: \"kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.431024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.431069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.452433 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.532453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.532581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl4c\" (UniqueName: \"kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.532636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.533301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.533337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " 
pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.533605 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.533891 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.567413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl4c\" (UniqueName: \"kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c\") pod \"redhat-operators-bdq5m\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.636511 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.636648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.637043 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.656639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.760763 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab87f796-68ff-4e78-b131-813998e00539" containerID="d0970066870d0b5bdc276c3de456d10cc38f9105a607bf2cdd6144bf8d5b9ff3" exitCode=0 Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.761816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerDied","Data":"d0970066870d0b5bdc276c3de456d10cc38f9105a607bf2cdd6144bf8d5b9ff3"} Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.784694 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.790668 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.797499 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:45:56 crc kubenswrapper[4832]: W0131 04:45:56.819672 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb7c4a8_ed79_44d9_87c6_e06ca68a70d8.slice/crio-70e06639d13f47a61cfcb795c4d4e3ff77ef0a8b20b0611d45ac7985078f99cc WatchSource:0}: Error finding container 70e06639d13f47a61cfcb795c4d4e3ff77ef0a8b20b0611d45ac7985078f99cc: Status 404 returned error can't find the container with id 70e06639d13f47a61cfcb795c4d4e3ff77ef0a8b20b0611d45ac7985078f99cc Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.892401 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:56 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:56 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:56 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.892801 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:56 crc kubenswrapper[4832]: I0131 04:45:56.942918 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rbg9h"] Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.089448 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zz6xx" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.264801 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.324253 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.349484 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access\") pod \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.349784 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir\") pod \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\" (UID: \"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c\") " Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.350425 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" (UID: "b701fb1e-3de6-4a54-9c06-3afca9a1cc3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.365727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" (UID: "b701fb1e-3de6-4a54-9c06-3afca9a1cc3c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.370453 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:45:57 crc kubenswrapper[4832]: W0131 04:45:57.374895 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ffeb3e3_9f51_40a2_9c02_988853dc2bc0.slice/crio-818a3dae68c34c0cf50d0a5042699ecb7729f38128be2e15ed08f45ec22b0270 WatchSource:0}: Error finding container 818a3dae68c34c0cf50d0a5042699ecb7729f38128be2e15ed08f45ec22b0270: Status 404 returned error can't find the container with id 818a3dae68c34c0cf50d0a5042699ecb7729f38128be2e15ed08f45ec22b0270 Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.453021 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.453063 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b701fb1e-3de6-4a54-9c06-3afca9a1cc3c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.835459 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"592be45a-a98b-41aa-8f9e-1fb266fce236","Type":"ContainerStarted","Data":"7fe6198deaf314436f2750b3eebab091e7e601f129f898cdd38306cce0b61ced"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.847634 4832 generic.go:334] "Generic (PLEG): container finished" podID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerID="803033c2f403e649a7c341aaacb7718ae628887e544c113e75ccda036bd25330" exitCode=0 Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.847704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerDied","Data":"803033c2f403e649a7c341aaacb7718ae628887e544c113e75ccda036bd25330"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.847738 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerStarted","Data":"818a3dae68c34c0cf50d0a5042699ecb7729f38128be2e15ed08f45ec22b0270"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.858649 4832 generic.go:334] "Generic (PLEG): container finished" podID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerID="c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430" exitCode=0 Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.858748 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerDied","Data":"c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.861508 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.873672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerStarted","Data":"70e06639d13f47a61cfcb795c4d4e3ff77ef0a8b20b0611d45ac7985078f99cc"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.873766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b701fb1e-3de6-4a54-9c06-3afca9a1cc3c","Type":"ContainerDied","Data":"e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.873787 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ad799fef23abe374a99cad26601d7c44639a14d2b0c0296177e037f526ffb1" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.894393 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" event={"ID":"88205cd8-6bbf-40af-a0d1-bfae431d97e7","Type":"ContainerStarted","Data":"43a074adb56b81d687baf54bc0249d3785530b7a7b428cf6f98f4357e79674ea"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.894452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" event={"ID":"88205cd8-6bbf-40af-a0d1-bfae431d97e7","Type":"ContainerStarted","Data":"b2a21a8f81f79948ae3ed430eff7d032eaf4d59ad55ea04a626417dc942038f7"} Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.895460 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:57 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 
04:45:57 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:57 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.895600 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:57 crc kubenswrapper[4832]: I0131 04:45:57.981195 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.350179 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.357765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v6mt8" Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.894061 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:58 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:45:58 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:58 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.894454 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.920364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rbg9h" 
event={"ID":"88205cd8-6bbf-40af-a0d1-bfae431d97e7","Type":"ContainerStarted","Data":"f5dc289478eedf0519c11efb440b2c3ed7351c7b8b60086ebcf2c8ae492765ae"} Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.932944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"592be45a-a98b-41aa-8f9e-1fb266fce236","Type":"ContainerStarted","Data":"475a6b77ae041ef42678e0e83627ed0ba853f47fa05cb8b11169e4b2571739e5"} Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.948367 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rbg9h" podStartSLOduration=145.948318727 podStartE2EDuration="2m25.948318727s" podCreationTimestamp="2026-01-31 04:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:58.939513137 +0000 UTC m=+167.888334842" watchObservedRunningTime="2026-01-31 04:45:58.948318727 +0000 UTC m=+167.897140422" Jan 31 04:45:58 crc kubenswrapper[4832]: I0131 04:45:58.991128 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.99110841 podStartE2EDuration="2.99110841s" podCreationTimestamp="2026-01-31 04:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:58.982797255 +0000 UTC m=+167.931618940" watchObservedRunningTime="2026-01-31 04:45:58.99110841 +0000 UTC m=+167.939930095" Jan 31 04:45:59 crc kubenswrapper[4832]: I0131 04:45:59.891408 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:45:59 crc kubenswrapper[4832]: [-]has-synced 
failed: reason withheld Jan 31 04:45:59 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:45:59 crc kubenswrapper[4832]: healthz check failed Jan 31 04:45:59 crc kubenswrapper[4832]: I0131 04:45:59.891491 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:45:59 crc kubenswrapper[4832]: I0131 04:45:59.946260 4832 generic.go:334] "Generic (PLEG): container finished" podID="592be45a-a98b-41aa-8f9e-1fb266fce236" containerID="475a6b77ae041ef42678e0e83627ed0ba853f47fa05cb8b11169e4b2571739e5" exitCode=0 Jan 31 04:45:59 crc kubenswrapper[4832]: I0131 04:45:59.946365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"592be45a-a98b-41aa-8f9e-1fb266fce236","Type":"ContainerDied","Data":"475a6b77ae041ef42678e0e83627ed0ba853f47fa05cb8b11169e4b2571739e5"} Jan 31 04:46:00 crc kubenswrapper[4832]: I0131 04:46:00.892405 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:00 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:00 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:00 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:00 crc kubenswrapper[4832]: I0131 04:46:00.893373 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:01 crc kubenswrapper[4832]: I0131 04:46:01.890782 4832 patch_prober.go:28] interesting 
pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:01 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:01 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:01 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:01 crc kubenswrapper[4832]: I0131 04:46:01.890879 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:02 crc kubenswrapper[4832]: I0131 04:46:02.896847 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:02 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:02 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:02 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:02 crc kubenswrapper[4832]: I0131 04:46:02.896924 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:03 crc kubenswrapper[4832]: I0131 04:46:03.605087 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tphpp" Jan 31 04:46:03 crc kubenswrapper[4832]: I0131 04:46:03.891054 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:03 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:03 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:03 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:03 crc kubenswrapper[4832]: I0131 04:46:03.891434 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:04 crc kubenswrapper[4832]: I0131 04:46:04.812911 4832 patch_prober.go:28] interesting pod/console-f9d7485db-sjkqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 31 04:46:04 crc kubenswrapper[4832]: I0131 04:46:04.813019 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sjkqt" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 31 04:46:04 crc kubenswrapper[4832]: I0131 04:46:04.890775 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:04 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:04 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:04 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:04 crc kubenswrapper[4832]: I0131 04:46:04.890840 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" 
podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:05 crc kubenswrapper[4832]: I0131 04:46:05.891523 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:05 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Jan 31 04:46:05 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:05 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:05 crc kubenswrapper[4832]: I0131 04:46:05.891630 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:06 crc kubenswrapper[4832]: I0131 04:46:06.893455 4832 patch_prober.go:28] interesting pod/router-default-5444994796-5xpcm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 04:46:06 crc kubenswrapper[4832]: [+]has-synced ok Jan 31 04:46:06 crc kubenswrapper[4832]: [+]process-running ok Jan 31 04:46:06 crc kubenswrapper[4832]: healthz check failed Jan 31 04:46:06 crc kubenswrapper[4832]: I0131 04:46:06.894044 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5xpcm" podUID="4994f771-6cff-4fcf-819e-9f3fcba71534" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 04:46:07 crc kubenswrapper[4832]: I0131 04:46:07.890536 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:46:07 crc 
kubenswrapper[4832]: I0131 04:46:07.894540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5xpcm" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.018258 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.032808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"592be45a-a98b-41aa-8f9e-1fb266fce236","Type":"ContainerDied","Data":"7fe6198deaf314436f2750b3eebab091e7e601f129f898cdd38306cce0b61ced"} Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.032854 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe6198deaf314436f2750b3eebab091e7e601f129f898cdd38306cce0b61ced" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.032918 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.080917 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir\") pod \"592be45a-a98b-41aa-8f9e-1fb266fce236\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.081027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access\") pod \"592be45a-a98b-41aa-8f9e-1fb266fce236\" (UID: \"592be45a-a98b-41aa-8f9e-1fb266fce236\") " Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.081018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "592be45a-a98b-41aa-8f9e-1fb266fce236" (UID: "592be45a-a98b-41aa-8f9e-1fb266fce236"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.081339 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/592be45a-a98b-41aa-8f9e-1fb266fce236-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.087288 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "592be45a-a98b-41aa-8f9e-1fb266fce236" (UID: "592be45a-a98b-41aa-8f9e-1fb266fce236"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:46:09 crc kubenswrapper[4832]: I0131 04:46:09.182871 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/592be45a-a98b-41aa-8f9e-1fb266fce236-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:13 crc kubenswrapper[4832]: I0131 04:46:13.562236 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:46:14 crc kubenswrapper[4832]: I0131 04:46:14.827504 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:46:14 crc kubenswrapper[4832]: I0131 04:46:14.831264 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:46:18 crc kubenswrapper[4832]: I0131 04:46:18.540220 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:18 crc kubenswrapper[4832]: I0131 04:46:18.540331 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:24 crc kubenswrapper[4832]: E0131 04:46:24.980389 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 04:46:24 crc kubenswrapper[4832]: E0131 
04:46:24.981045 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j45hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5sr9c_openshift-marketplace(383f7aea-cd36-47b8-8a08-fbb8a60e9ab5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:24 crc kubenswrapper[4832]: E0131 04:46:24.982893 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5sr9c" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" Jan 31 04:46:25 crc kubenswrapper[4832]: I0131 04:46:25.274081 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rg2nc" Jan 31 04:46:25 crc kubenswrapper[4832]: E0131 04:46:25.823716 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 04:46:25 crc kubenswrapper[4832]: E0131 04:46:25.823980 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ns9df_openshift-marketplace(cfe18813-bbee-404e-a100-ea164dcd83ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:25 crc kubenswrapper[4832]: E0131 04:46:25.825153 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ns9df" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" Jan 31 04:46:28 crc 
kubenswrapper[4832]: E0131 04:46:28.908995 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5sr9c" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" Jan 31 04:46:28 crc kubenswrapper[4832]: E0131 04:46:28.909089 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ns9df" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" Jan 31 04:46:29 crc kubenswrapper[4832]: E0131 04:46:29.002752 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 04:46:29 crc kubenswrapper[4832]: E0131 04:46:29.002945 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wl4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bdq5m_openshift-marketplace(8ffeb3e3-9f51-40a2-9c02-988853dc2bc0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:29 crc kubenswrapper[4832]: E0131 04:46:29.004196 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bdq5m" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" Jan 31 04:46:32 crc 
kubenswrapper[4832]: E0131 04:46:32.156713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bdq5m" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.525899 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:46:32 crc kubenswrapper[4832]: E0131 04:46:32.526214 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.526227 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: E0131 04:46:32.526249 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592be45a-a98b-41aa-8f9e-1fb266fce236" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.526292 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="592be45a-a98b-41aa-8f9e-1fb266fce236" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.526422 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="592be45a-a98b-41aa-8f9e-1fb266fce236" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.526439 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b701fb1e-3de6-4a54-9c06-3afca9a1cc3c" containerName="pruner" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.527275 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.532547 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.536115 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.536248 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.686964 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.687121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.788696 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.788855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.788932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.811963 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:32 crc kubenswrapper[4832]: I0131 04:46:32.859640 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.778951 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.779554 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fm5qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6spnx_openshift-marketplace(ab87f796-68ff-4e78-b131-813998e00539): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.780924 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6spnx" podUID="ab87f796-68ff-4e78-b131-813998e00539" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.786546 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.786820 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c49t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4bzcs_openshift-marketplace(1c79b0fe-2283-47ce-a36d-800a09a3f29e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:34 crc kubenswrapper[4832]: E0131 04:46:34.788011 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4bzcs" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" Jan 31 04:46:35 crc 
kubenswrapper[4832]: I0131 04:46:35.147167 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 04:46:35 crc kubenswrapper[4832]: I0131 04:46:35.191157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a89f8f7-e20b-4197-8b66-74921fc4ae43","Type":"ContainerStarted","Data":"b6bd0d366d01c13fe10bbeefe278d0923bc63d6d3348786f5c75f68bcfb14723"} Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.194331 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4bzcs" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.195363 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6spnx" podUID="ab87f796-68ff-4e78-b131-813998e00539" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.741239 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.741908 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h65qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s577w_openshift-marketplace(5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.743419 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s577w" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" Jan 31 04:46:35 crc 
kubenswrapper[4832]: E0131 04:46:35.839587 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.840030 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj49g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-f26d5_openshift-marketplace(fadd223c-2d95-4429-be17-6f15be7dbbbc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.841245 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f26d5" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.907538 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.907760 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f77t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dhxc2_openshift-marketplace(1980967b-05ed-409d-9774-a946cecfda9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 04:46:35 crc kubenswrapper[4832]: E0131 04:46:35.909903 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dhxc2" podUID="1980967b-05ed-409d-9774-a946cecfda9c" Jan 31 04:46:36 crc 
kubenswrapper[4832]: I0131 04:46:36.197385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a89f8f7-e20b-4197-8b66-74921fc4ae43","Type":"ContainerStarted","Data":"9acd506587dada3ee05fc57954dc8792cc1a30151ab4a093246c96fdf3918340"} Jan 31 04:46:36 crc kubenswrapper[4832]: E0131 04:46:36.199838 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dhxc2" podUID="1980967b-05ed-409d-9774-a946cecfda9c" Jan 31 04:46:36 crc kubenswrapper[4832]: E0131 04:46:36.199985 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f26d5" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" Jan 31 04:46:36 crc kubenswrapper[4832]: E0131 04:46:36.200142 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s577w" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" Jan 31 04:46:36 crc kubenswrapper[4832]: I0131 04:46:36.236545 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.236517634 podStartE2EDuration="4.236517634s" podCreationTimestamp="2026-01-31 04:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:46:36.230602241 +0000 UTC m=+205.179423946" 
watchObservedRunningTime="2026-01-31 04:46:36.236517634 +0000 UTC m=+205.185339329" Jan 31 04:46:37 crc kubenswrapper[4832]: I0131 04:46:37.204935 4832 generic.go:334] "Generic (PLEG): container finished" podID="5a89f8f7-e20b-4197-8b66-74921fc4ae43" containerID="9acd506587dada3ee05fc57954dc8792cc1a30151ab4a093246c96fdf3918340" exitCode=0 Jan 31 04:46:37 crc kubenswrapper[4832]: I0131 04:46:37.205068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a89f8f7-e20b-4197-8b66-74921fc4ae43","Type":"ContainerDied","Data":"9acd506587dada3ee05fc57954dc8792cc1a30151ab4a093246c96fdf3918340"} Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.516819 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.682288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access\") pod \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.682535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir\") pod \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\" (UID: \"5a89f8f7-e20b-4197-8b66-74921fc4ae43\") " Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.682920 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a89f8f7-e20b-4197-8b66-74921fc4ae43" (UID: "5a89f8f7-e20b-4197-8b66-74921fc4ae43"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.688811 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a89f8f7-e20b-4197-8b66-74921fc4ae43" (UID: "5a89f8f7-e20b-4197-8b66-74921fc4ae43"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.784108 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:38 crc kubenswrapper[4832]: I0131 04:46:38.784140 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a89f8f7-e20b-4197-8b66-74921fc4ae43-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.220955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5a89f8f7-e20b-4197-8b66-74921fc4ae43","Type":"ContainerDied","Data":"b6bd0d366d01c13fe10bbeefe278d0923bc63d6d3348786f5c75f68bcfb14723"} Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.221312 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6bd0d366d01c13fe10bbeefe278d0923bc63d6d3348786f5c75f68bcfb14723" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.221083 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.725217 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:46:39 crc kubenswrapper[4832]: E0131 04:46:39.725551 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a89f8f7-e20b-4197-8b66-74921fc4ae43" containerName="pruner" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.725619 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a89f8f7-e20b-4197-8b66-74921fc4ae43" containerName="pruner" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.725740 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a89f8f7-e20b-4197-8b66-74921fc4ae43" containerName="pruner" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.726134 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.729407 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.729692 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.742774 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.899795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.900194 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:39 crc kubenswrapper[4832]: I0131 04:46:39.900228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.001988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.002039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.002104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.002185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.002225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.029824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access\") pod \"installer-9-crc\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.051905 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:46:40 crc kubenswrapper[4832]: I0131 04:46:40.474462 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 04:46:41 crc kubenswrapper[4832]: I0131 04:46:41.236188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df36146-a4dc-4f8d-830a-aebc8933d8af","Type":"ContainerStarted","Data":"1cae4911dd2655fc866729338b3c732fe76a6f853e3aae8d4993cda25fedec1c"} Jan 31 04:46:41 crc kubenswrapper[4832]: I0131 04:46:41.236917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df36146-a4dc-4f8d-830a-aebc8933d8af","Type":"ContainerStarted","Data":"ab8d39f98712775cd1e3fc0dce86dcb67a8b1e45bafb5494f5051da0ffa38be2"} Jan 31 04:46:41 crc kubenswrapper[4832]: I0131 04:46:41.263007 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.262986659 podStartE2EDuration="2.262986659s" podCreationTimestamp="2026-01-31 04:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:46:41.257976944 +0000 UTC m=+210.206798629" watchObservedRunningTime="2026-01-31 04:46:41.262986659 +0000 UTC m=+210.211808344" Jan 31 04:46:42 crc kubenswrapper[4832]: I0131 04:46:42.249778 4832 generic.go:334] "Generic (PLEG): container finished" podID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerID="f93c359f6f923444a5a66286732277e363cbdb5a973b3120ea1b549149fe3ba9" exitCode=0 Jan 31 04:46:42 crc kubenswrapper[4832]: I0131 04:46:42.249887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" 
event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerDied","Data":"f93c359f6f923444a5a66286732277e363cbdb5a973b3120ea1b549149fe3ba9"} Jan 31 04:46:43 crc kubenswrapper[4832]: I0131 04:46:43.258241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerStarted","Data":"f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209"} Jan 31 04:46:43 crc kubenswrapper[4832]: I0131 04:46:43.295457 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5sr9c" podStartSLOduration=2.217160351 podStartE2EDuration="51.295432708s" podCreationTimestamp="2026-01-31 04:45:52 +0000 UTC" firstStartedPulling="2026-01-31 04:45:53.701369641 +0000 UTC m=+162.650191326" lastFinishedPulling="2026-01-31 04:46:42.779641998 +0000 UTC m=+211.728463683" observedRunningTime="2026-01-31 04:46:43.290191616 +0000 UTC m=+212.239013331" watchObservedRunningTime="2026-01-31 04:46:43.295432708 +0000 UTC m=+212.244254393" Jan 31 04:46:43 crc kubenswrapper[4832]: I0131 04:46:43.402776 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:46:44 crc kubenswrapper[4832]: I0131 04:46:44.271648 4832 generic.go:334] "Generic (PLEG): container finished" podID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerID="28d032527cfa47dccd485633067fec50c89160f5d4dd7934f74e078e0a907ca7" exitCode=0 Jan 31 04:46:44 crc kubenswrapper[4832]: I0131 04:46:44.271737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerDied","Data":"28d032527cfa47dccd485633067fec50c89160f5d4dd7934f74e078e0a907ca7"} Jan 31 04:46:45 crc kubenswrapper[4832]: I0131 04:46:45.279932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerStarted","Data":"d76038a549b819fa906cc027e5878e6c05616bcbe9e6681efda571ae504feddc"} Jan 31 04:46:45 crc kubenswrapper[4832]: I0131 04:46:45.300380 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ns9df" podStartSLOduration=3.288294557 podStartE2EDuration="53.300362646s" podCreationTimestamp="2026-01-31 04:45:52 +0000 UTC" firstStartedPulling="2026-01-31 04:45:54.694540389 +0000 UTC m=+163.643362074" lastFinishedPulling="2026-01-31 04:46:44.706608478 +0000 UTC m=+213.655430163" observedRunningTime="2026-01-31 04:46:45.29758149 +0000 UTC m=+214.246403175" watchObservedRunningTime="2026-01-31 04:46:45.300362646 +0000 UTC m=+214.249184321" Jan 31 04:46:46 crc kubenswrapper[4832]: I0131 04:46:46.286947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerStarted","Data":"7f535b7b02cbe50ec4c2356bc806e796e80010416f3355b9a67e87131d249458"} Jan 31 04:46:47 crc kubenswrapper[4832]: I0131 04:46:47.296544 4832 generic.go:334] "Generic (PLEG): container finished" podID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerID="7f535b7b02cbe50ec4c2356bc806e796e80010416f3355b9a67e87131d249458" exitCode=0 Jan 31 04:46:47 crc kubenswrapper[4832]: I0131 04:46:47.296626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerDied","Data":"7f535b7b02cbe50ec4c2356bc806e796e80010416f3355b9a67e87131d249458"} Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.306827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" 
event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerStarted","Data":"0f96b522559d4a8f8f5ea347eb647e323062922f4005737781efe3deeb8dd973"} Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.539466 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.539524 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.539620 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.540082 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:46:48 crc kubenswrapper[4832]: I0131 04:46:48.540281 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720" gracePeriod=600 Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.318591 
4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720" exitCode=0 Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.319060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720"} Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.319098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9"} Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.321315 4832 generic.go:334] "Generic (PLEG): container finished" podID="1980967b-05ed-409d-9774-a946cecfda9c" containerID="0885414ce46c2e06092d9034b0878aa6f6ee4d9112e92258d03b38262f3c25a8" exitCode=0 Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.321362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerDied","Data":"0885414ce46c2e06092d9034b0878aa6f6ee4d9112e92258d03b38262f3c25a8"} Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.325251 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab87f796-68ff-4e78-b131-813998e00539" containerID="101728c1cda978159686cb918a3a571c51d07b110f8613c50c0cddd30af6eadf" exitCode=0 Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.325297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" 
event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerDied","Data":"101728c1cda978159686cb918a3a571c51d07b110f8613c50c0cddd30af6eadf"} Jan 31 04:46:49 crc kubenswrapper[4832]: I0131 04:46:49.344523 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bdq5m" podStartSLOduration=3.26760818 podStartE2EDuration="53.344500974s" podCreationTimestamp="2026-01-31 04:45:56 +0000 UTC" firstStartedPulling="2026-01-31 04:45:57.853768828 +0000 UTC m=+166.802590513" lastFinishedPulling="2026-01-31 04:46:47.930661622 +0000 UTC m=+216.879483307" observedRunningTime="2026-01-31 04:46:48.327632799 +0000 UTC m=+217.276454484" watchObservedRunningTime="2026-01-31 04:46:49.344500974 +0000 UTC m=+218.293322659" Jan 31 04:46:50 crc kubenswrapper[4832]: I0131 04:46:50.333087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerStarted","Data":"4a95f60cdf4b28423b8ab588d989ff9daae24b4ad47751b790b6f16d3768f5c7"} Jan 31 04:46:50 crc kubenswrapper[4832]: I0131 04:46:50.336864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerStarted","Data":"8c113c36c678a1b2879d697905e6508d5550da1bbddba013bb69357c8b80f796"} Jan 31 04:46:50 crc kubenswrapper[4832]: I0131 04:46:50.353799 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dhxc2" podStartSLOduration=2.357472494 podStartE2EDuration="57.353779904s" podCreationTimestamp="2026-01-31 04:45:53 +0000 UTC" firstStartedPulling="2026-01-31 04:45:54.706089234 +0000 UTC m=+163.654910919" lastFinishedPulling="2026-01-31 04:46:49.702396644 +0000 UTC m=+218.651218329" observedRunningTime="2026-01-31 04:46:50.351403771 +0000 UTC m=+219.300225456" 
watchObservedRunningTime="2026-01-31 04:46:50.353779904 +0000 UTC m=+219.302601589" Jan 31 04:46:50 crc kubenswrapper[4832]: I0131 04:46:50.419535 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6spnx" podStartSLOduration=3.434464476 podStartE2EDuration="56.419512996s" podCreationTimestamp="2026-01-31 04:45:54 +0000 UTC" firstStartedPulling="2026-01-31 04:45:56.76347159 +0000 UTC m=+165.712293275" lastFinishedPulling="2026-01-31 04:46:49.74852011 +0000 UTC m=+218.697341795" observedRunningTime="2026-01-31 04:46:50.414168621 +0000 UTC m=+219.362990316" watchObservedRunningTime="2026-01-31 04:46:50.419512996 +0000 UTC m=+219.368334681" Jan 31 04:46:51 crc kubenswrapper[4832]: I0131 04:46:51.343873 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerID="29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763" exitCode=0 Jan 31 04:46:51 crc kubenswrapper[4832]: I0131 04:46:51.344225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerDied","Data":"29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763"} Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.350873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerStarted","Data":"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f"} Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.353199 4832 generic.go:334] "Generic (PLEG): container finished" podID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerID="297ba5c9ffff3caae183ccde1505191ade6c21258993d7a5942606f8051c7a2f" exitCode=0 Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.353236 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerDied","Data":"297ba5c9ffff3caae183ccde1505191ade6c21258993d7a5942606f8051c7a2f"} Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.371381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4bzcs" podStartSLOduration=2.298510465 podStartE2EDuration="58.371360655s" podCreationTimestamp="2026-01-31 04:45:54 +0000 UTC" firstStartedPulling="2026-01-31 04:45:55.726415845 +0000 UTC m=+164.675237530" lastFinishedPulling="2026-01-31 04:46:51.799266035 +0000 UTC m=+220.748087720" observedRunningTime="2026-01-31 04:46:52.369663891 +0000 UTC m=+221.318485586" watchObservedRunningTime="2026-01-31 04:46:52.371360655 +0000 UTC m=+221.320182340" Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.808717 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:46:52 crc kubenswrapper[4832]: I0131 04:46:52.808767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.235639 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.235732 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.433813 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.434265 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:46:53 crc 
kubenswrapper[4832]: I0131 04:46:53.774515 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.781938 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.783260 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.827445 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:46:53 crc kubenswrapper[4832]: I0131 04:46:53.831473 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.367367 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerStarted","Data":"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e"} Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.370339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerStarted","Data":"66fe5b8a177ba3b91f49f50cce134336e894b89d0c895c326c2405f0ccafd77d"} Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.426527 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.451754 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f26d5" podStartSLOduration=3.266264352 
podStartE2EDuration="1m2.451735115s" podCreationTimestamp="2026-01-31 04:45:52 +0000 UTC" firstStartedPulling="2026-01-31 04:45:54.71737289 +0000 UTC m=+163.666194575" lastFinishedPulling="2026-01-31 04:46:53.902843663 +0000 UTC m=+222.851665338" observedRunningTime="2026-01-31 04:46:54.411069078 +0000 UTC m=+223.359890773" watchObservedRunningTime="2026-01-31 04:46:54.451735115 +0000 UTC m=+223.400556800" Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.815886 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.817629 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:46:54 crc kubenswrapper[4832]: I0131 04:46:54.854970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.216780 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.216833 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.269324 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.377616 4832 generic.go:334] "Generic (PLEG): container finished" podID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerID="fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e" exitCode=0 Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.377807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" 
event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerDied","Data":"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e"} Jan 31 04:46:55 crc kubenswrapper[4832]: I0131 04:46:55.418728 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.283802 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.425512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.787398 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.787455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.834186 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.884354 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:46:56 crc kubenswrapper[4832]: I0131 04:46:56.884923 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ns9df" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="registry-server" containerID="cri-o://d76038a549b819fa906cc027e5878e6c05616bcbe9e6681efda571ae504feddc" gracePeriod=2 Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.108052 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" 
event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerStarted","Data":"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9"} Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.110308 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6spnx" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="registry-server" containerID="cri-o://8c113c36c678a1b2879d697905e6508d5550da1bbddba013bb69357c8b80f796" gracePeriod=2 Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.140056 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s577w" podStartSLOduration=4.687396812 podStartE2EDuration="1m3.140034675s" podCreationTimestamp="2026-01-31 04:45:55 +0000 UTC" firstStartedPulling="2026-01-31 04:45:57.861489666 +0000 UTC m=+166.810311351" lastFinishedPulling="2026-01-31 04:46:56.314127529 +0000 UTC m=+225.262949214" observedRunningTime="2026-01-31 04:46:58.133663898 +0000 UTC m=+227.082485593" watchObservedRunningTime="2026-01-31 04:46:58.140034675 +0000 UTC m=+227.088856360" Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.169242 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.681417 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:46:58 crc kubenswrapper[4832]: I0131 04:46:58.682193 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dhxc2" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="registry-server" containerID="cri-o://4a95f60cdf4b28423b8ab588d989ff9daae24b4ad47751b790b6f16d3768f5c7" gracePeriod=2 Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.115220 4832 generic.go:334] "Generic (PLEG): container 
finished" podID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerID="d76038a549b819fa906cc027e5878e6c05616bcbe9e6681efda571ae504feddc" exitCode=0 Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.115334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerDied","Data":"d76038a549b819fa906cc027e5878e6c05616bcbe9e6681efda571ae504feddc"} Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.282288 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.801944 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.994684 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities\") pod \"cfe18813-bbee-404e-a100-ea164dcd83ec\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.994818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbt7\" (UniqueName: \"kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7\") pod \"cfe18813-bbee-404e-a100-ea164dcd83ec\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.994870 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content\") pod \"cfe18813-bbee-404e-a100-ea164dcd83ec\" (UID: \"cfe18813-bbee-404e-a100-ea164dcd83ec\") " Jan 31 04:46:59 crc kubenswrapper[4832]: I0131 04:46:59.996510 4832 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities" (OuterVolumeSpecName: "utilities") pod "cfe18813-bbee-404e-a100-ea164dcd83ec" (UID: "cfe18813-bbee-404e-a100-ea164dcd83ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.003680 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7" (OuterVolumeSpecName: "kube-api-access-mfbt7") pod "cfe18813-bbee-404e-a100-ea164dcd83ec" (UID: "cfe18813-bbee-404e-a100-ea164dcd83ec"). InnerVolumeSpecName "kube-api-access-mfbt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.051184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfe18813-bbee-404e-a100-ea164dcd83ec" (UID: "cfe18813-bbee-404e-a100-ea164dcd83ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.097616 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.097663 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbt7\" (UniqueName: \"kubernetes.io/projected/cfe18813-bbee-404e-a100-ea164dcd83ec-kube-api-access-mfbt7\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.097676 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfe18813-bbee-404e-a100-ea164dcd83ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.124258 4832 generic.go:334] "Generic (PLEG): container finished" podID="1980967b-05ed-409d-9774-a946cecfda9c" containerID="4a95f60cdf4b28423b8ab588d989ff9daae24b4ad47751b790b6f16d3768f5c7" exitCode=0 Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.124326 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerDied","Data":"4a95f60cdf4b28423b8ab588d989ff9daae24b4ad47751b790b6f16d3768f5c7"} Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.127672 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab87f796-68ff-4e78-b131-813998e00539" containerID="8c113c36c678a1b2879d697905e6508d5550da1bbddba013bb69357c8b80f796" exitCode=0 Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.127785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" 
event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerDied","Data":"8c113c36c678a1b2879d697905e6508d5550da1bbddba013bb69357c8b80f796"} Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.130133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ns9df" event={"ID":"cfe18813-bbee-404e-a100-ea164dcd83ec","Type":"ContainerDied","Data":"5d65914a7639dcaaf64302de082c7bdd6a4ca6a74043d5d060c609268b9dd9f3"} Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.130184 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ns9df" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.130215 4832 scope.go:117] "RemoveContainer" containerID="d76038a549b819fa906cc027e5878e6c05616bcbe9e6681efda571ae504feddc" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.130519 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bdq5m" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="registry-server" containerID="cri-o://0f96b522559d4a8f8f5ea347eb647e323062922f4005737781efe3deeb8dd973" gracePeriod=2 Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.168495 4832 scope.go:117] "RemoveContainer" containerID="28d032527cfa47dccd485633067fec50c89160f5d4dd7934f74e078e0a907ca7" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.170010 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.176440 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ns9df"] Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.205482 4832 scope.go:117] "RemoveContainer" containerID="ad4876d71ccc76d6044ca0409dd9c74ed8c43d60f2a616ac88c1ec2669e48957" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.332837 4832 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.504057 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content\") pod \"ab87f796-68ff-4e78-b131-813998e00539\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.504280 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities\") pod \"ab87f796-68ff-4e78-b131-813998e00539\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.504326 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm5qj\" (UniqueName: \"kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj\") pod \"ab87f796-68ff-4e78-b131-813998e00539\" (UID: \"ab87f796-68ff-4e78-b131-813998e00539\") " Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.507648 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities" (OuterVolumeSpecName: "utilities") pod "ab87f796-68ff-4e78-b131-813998e00539" (UID: "ab87f796-68ff-4e78-b131-813998e00539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.512069 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj" (OuterVolumeSpecName: "kube-api-access-fm5qj") pod "ab87f796-68ff-4e78-b131-813998e00539" (UID: "ab87f796-68ff-4e78-b131-813998e00539"). 
InnerVolumeSpecName "kube-api-access-fm5qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.532788 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab87f796-68ff-4e78-b131-813998e00539" (UID: "ab87f796-68ff-4e78-b131-813998e00539"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.606928 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.606962 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab87f796-68ff-4e78-b131-813998e00539-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:00 crc kubenswrapper[4832]: I0131 04:47:00.606973 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm5qj\" (UniqueName: \"kubernetes.io/projected/ab87f796-68ff-4e78-b131-813998e00539-kube-api-access-fm5qj\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.137590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6spnx" event={"ID":"ab87f796-68ff-4e78-b131-813998e00539","Type":"ContainerDied","Data":"20824852933ee24cdd78770dcd1244a015c4ce97729dd6a6727f4a5c5837515a"} Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.137667 4832 scope.go:117] "RemoveContainer" containerID="8c113c36c678a1b2879d697905e6508d5550da1bbddba013bb69357c8b80f796" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.137830 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6spnx" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.159044 4832 scope.go:117] "RemoveContainer" containerID="101728c1cda978159686cb918a3a571c51d07b110f8613c50c0cddd30af6eadf" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.178137 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.180622 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6spnx"] Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.191977 4832 scope.go:117] "RemoveContainer" containerID="d0970066870d0b5bdc276c3de456d10cc38f9105a607bf2cdd6144bf8d5b9ff3" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.358941 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.534098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities\") pod \"1980967b-05ed-409d-9774-a946cecfda9c\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.534160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content\") pod \"1980967b-05ed-409d-9774-a946cecfda9c\" (UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.534288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f77t\" (UniqueName: \"kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t\") pod \"1980967b-05ed-409d-9774-a946cecfda9c\" 
(UID: \"1980967b-05ed-409d-9774-a946cecfda9c\") " Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.536640 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities" (OuterVolumeSpecName: "utilities") pod "1980967b-05ed-409d-9774-a946cecfda9c" (UID: "1980967b-05ed-409d-9774-a946cecfda9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.538920 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t" (OuterVolumeSpecName: "kube-api-access-8f77t") pod "1980967b-05ed-409d-9774-a946cecfda9c" (UID: "1980967b-05ed-409d-9774-a946cecfda9c"). InnerVolumeSpecName "kube-api-access-8f77t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.636230 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f77t\" (UniqueName: \"kubernetes.io/projected/1980967b-05ed-409d-9774-a946cecfda9c-kube-api-access-8f77t\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.636659 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.870088 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab87f796-68ff-4e78-b131-813998e00539" path="/var/lib/kubelet/pods/ab87f796-68ff-4e78-b131-813998e00539/volumes" Jan 31 04:47:01 crc kubenswrapper[4832]: I0131 04:47:01.871204 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" path="/var/lib/kubelet/pods/cfe18813-bbee-404e-a100-ea164dcd83ec/volumes" Jan 31 04:47:02 crc 
kubenswrapper[4832]: I0131 04:47:02.154012 4832 generic.go:334] "Generic (PLEG): container finished" podID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerID="0f96b522559d4a8f8f5ea347eb647e323062922f4005737781efe3deeb8dd973" exitCode=0 Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.154090 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerDied","Data":"0f96b522559d4a8f8f5ea347eb647e323062922f4005737781efe3deeb8dd973"} Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.159755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dhxc2" event={"ID":"1980967b-05ed-409d-9774-a946cecfda9c","Type":"ContainerDied","Data":"7f48aae9ae7933d0e072721f46ef17d5a758b4fe1bc64c4a07f8cd343471876d"} Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.159810 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dhxc2" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.160202 4832 scope.go:117] "RemoveContainer" containerID="4a95f60cdf4b28423b8ab588d989ff9daae24b4ad47751b790b6f16d3768f5c7" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.177487 4832 scope.go:117] "RemoveContainer" containerID="0885414ce46c2e06092d9034b0878aa6f6ee4d9112e92258d03b38262f3c25a8" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.202792 4832 scope.go:117] "RemoveContainer" containerID="0ca71dc4249a7e4d56db10712241781c78e32e8e1b61b57f033225c94171b07b" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.395427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1980967b-05ed-409d-9774-a946cecfda9c" (UID: "1980967b-05ed-409d-9774-a946cecfda9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.447756 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1980967b-05ed-409d-9774-a946cecfda9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.494034 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:47:02 crc kubenswrapper[4832]: I0131 04:47:02.498264 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dhxc2"] Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.008895 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.008970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.063019 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.147335 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.171437 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bdq5m" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.172134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bdq5m" event={"ID":"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0","Type":"ContainerDied","Data":"818a3dae68c34c0cf50d0a5042699ecb7729f38128be2e15ed08f45ec22b0270"} Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.172196 4832 scope.go:117] "RemoveContainer" containerID="0f96b522559d4a8f8f5ea347eb647e323062922f4005737781efe3deeb8dd973" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.189890 4832 scope.go:117] "RemoveContainer" containerID="7f535b7b02cbe50ec4c2356bc806e796e80010416f3355b9a67e87131d249458" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.220892 4832 scope.go:117] "RemoveContainer" containerID="803033c2f403e649a7c341aaacb7718ae628887e544c113e75ccda036bd25330" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.224194 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.261373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities\") pod \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.261536 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content\") pod \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.261618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wl4c\" 
(UniqueName: \"kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c\") pod \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\" (UID: \"8ffeb3e3-9f51-40a2-9c02-988853dc2bc0\") " Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.263218 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities" (OuterVolumeSpecName: "utilities") pod "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" (UID: "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.280754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c" (OuterVolumeSpecName: "kube-api-access-9wl4c") pod "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" (UID: "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0"). InnerVolumeSpecName "kube-api-access-9wl4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.364027 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.364730 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wl4c\" (UniqueName: \"kubernetes.io/projected/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-kube-api-access-9wl4c\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.393913 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" (UID: "8ffeb3e3-9f51-40a2-9c02-988853dc2bc0"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.465944 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.505442 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.508523 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bdq5m"] Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.891881 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1980967b-05ed-409d-9774-a946cecfda9c" path="/var/lib/kubelet/pods/1980967b-05ed-409d-9774-a946cecfda9c/volumes" Jan 31 04:47:03 crc kubenswrapper[4832]: I0131 04:47:03.892976 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" path="/var/lib/kubelet/pods/8ffeb3e3-9f51-40a2-9c02-988853dc2bc0/volumes" Jan 31 04:47:06 crc kubenswrapper[4832]: I0131 04:47:06.240891 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:47:06 crc kubenswrapper[4832]: I0131 04:47:06.240944 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:47:06 crc kubenswrapper[4832]: I0131 04:47:06.285522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:47:07 crc kubenswrapper[4832]: I0131 04:47:07.257743 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:47:08 crc kubenswrapper[4832]: 
I0131 04:47:08.426996 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerName="oauth-openshift" containerID="cri-o://b3ddc747d7f38889268a5486cf9546baebfacc80f51b4ad20d6814251a166e44" gracePeriod=15 Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.225186 4832 generic.go:334] "Generic (PLEG): container finished" podID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerID="b3ddc747d7f38889268a5486cf9546baebfacc80f51b4ad20d6814251a166e44" exitCode=0 Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.225314 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" event={"ID":"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0","Type":"ContainerDied","Data":"b3ddc747d7f38889268a5486cf9546baebfacc80f51b4ad20d6814251a166e44"} Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.370831 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wxcq\" (UniqueName: \"kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568211 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568326 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.568403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569330 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569419 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: 
\"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.569914 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle\") pod \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\" (UID: \"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0\") " Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.570235 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.570251 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.570912 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca" (OuterVolumeSpecName: 
"v4-0-config-system-service-ca") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.571343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.571817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.574377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq" (OuterVolumeSpecName: "kube-api-access-4wxcq") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "kube-api-access-4wxcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.574504 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.574652 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.574805 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.576008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.576880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.577112 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.577540 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.577734 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" (UID: "c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671677 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671719 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671734 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671747 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671760 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671769 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wxcq\" (UniqueName: \"kubernetes.io/projected/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-kube-api-access-4wxcq\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671782 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671793 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671805 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671818 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671827 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:09 crc kubenswrapper[4832]: I0131 04:47:09.671838 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:10 crc kubenswrapper[4832]: I0131 04:47:10.231598 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" 
event={"ID":"c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0","Type":"ContainerDied","Data":"23de51ed6aec4c15fb52121129457bbebf7f4dcf36f0b25afdbf46e9426fd226"} Jan 31 04:47:10 crc kubenswrapper[4832]: I0131 04:47:10.231673 4832 scope.go:117] "RemoveContainer" containerID="b3ddc747d7f38889268a5486cf9546baebfacc80f51b4ad20d6814251a166e44" Jan 31 04:47:10 crc kubenswrapper[4832]: I0131 04:47:10.231680 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-94sbp" Jan 31 04:47:10 crc kubenswrapper[4832]: I0131 04:47:10.257860 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:47:10 crc kubenswrapper[4832]: I0131 04:47:10.272494 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-94sbp"] Jan 31 04:47:11 crc kubenswrapper[4832]: I0131 04:47:11.865244 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" path="/var/lib/kubelet/pods/c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0/volumes" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.645772 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-grmcv"] Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646442 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646463 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646485 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646498 
4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646522 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646535 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646555 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646595 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646613 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646626 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646652 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerName="oauth-openshift" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646664 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerName="oauth-openshift" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646687 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646699 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646723 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646737 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646753 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646766 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646787 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646799 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646820 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646832 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646851 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646863 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="extract-content" Jan 31 04:47:14 crc kubenswrapper[4832]: E0131 04:47:14.646882 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.646894 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="extract-utilities" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647090 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe18813-bbee-404e-a100-ea164dcd83ec" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647112 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab87f796-68ff-4e78-b131-813998e00539" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647134 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffeb3e3-9f51-40a2-9c02-988853dc2bc0" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647153 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bfdfa4-81a2-460c-9eb5-4c6723f61ae0" containerName="oauth-openshift" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647173 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1980967b-05ed-409d-9774-a946cecfda9c" containerName="registry-server" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.647856 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.657483 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.657680 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.657749 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.657483 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.657495 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658100 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658348 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658359 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 04:47:14 crc 
kubenswrapper[4832]: I0131 04:47:14.658604 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.658704 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.673002 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.680472 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-grmcv"] Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.681822 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.684638 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745278 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: 
\"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-dir\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745779 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dql\" (UniqueName: \"kubernetes.io/projected/34e8d87b-7723-45ce-9c8d-805f63d73513-kube-api-access-h9dql\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-policies\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.745837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.746081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.746138 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.746176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.746234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.746255 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-grmcv\" 
(UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847817 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847894 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.847988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-dir\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-policies\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9dql\" (UniqueName: \"kubernetes.io/projected/34e8d87b-7723-45ce-9c8d-805f63d73513-kube-api-access-h9dql\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " 
pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848171 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.848281 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.849212 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-dir\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.850823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.851928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.852285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-audit-policies\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.852481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc 
kubenswrapper[4832]: I0131 04:47:14.858797 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.859756 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.868539 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.869234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.870776 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.872130 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.872177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.874312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34e8d87b-7723-45ce-9c8d-805f63d73513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.889857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9dql\" (UniqueName: \"kubernetes.io/projected/34e8d87b-7723-45ce-9c8d-805f63d73513-kube-api-access-h9dql\") pod \"oauth-openshift-76fc545986-grmcv\" (UID: \"34e8d87b-7723-45ce-9c8d-805f63d73513\") " pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 
04:47:14 crc kubenswrapper[4832]: I0131 04:47:14.986604 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:15 crc kubenswrapper[4832]: I0131 04:47:15.411074 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-grmcv"] Jan 31 04:47:16 crc kubenswrapper[4832]: I0131 04:47:16.300548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" event={"ID":"34e8d87b-7723-45ce-9c8d-805f63d73513","Type":"ContainerStarted","Data":"4b1651f11fc9f57ca93f86d12d26a5bc1e93f2f55f081117f23bc326b8930527"} Jan 31 04:47:16 crc kubenswrapper[4832]: I0131 04:47:16.300896 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:16 crc kubenswrapper[4832]: I0131 04:47:16.300907 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" event={"ID":"34e8d87b-7723-45ce-9c8d-805f63d73513","Type":"ContainerStarted","Data":"07eef1ffff68ee7e972144d92dca314437b4a2d9aa748e2eebe1a18ed6ca536d"} Jan 31 04:47:16 crc kubenswrapper[4832]: I0131 04:47:16.305555 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" Jan 31 04:47:16 crc kubenswrapper[4832]: I0131 04:47:16.325416 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76fc545986-grmcv" podStartSLOduration=33.325399793 podStartE2EDuration="33.325399793s" podCreationTimestamp="2026-01-31 04:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:47:16.323619778 +0000 UTC m=+245.272441463" watchObservedRunningTime="2026-01-31 
04:47:16.325399793 +0000 UTC m=+245.274221478" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.635289 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.636728 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140" gracePeriod=15 Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.636822 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882" gracePeriod=15 Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.636771 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695" gracePeriod=15 Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.636805 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f" gracePeriod=15 Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.636771 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1" gracePeriod=15 Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641145 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641537 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641554 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641578 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641585 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641599 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641607 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641614 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:47:18 crc 
kubenswrapper[4832]: E0131 04:47:18.641622 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641629 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641638 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641644 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.641656 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641662 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641759 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641770 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641776 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641786 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641822 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.641832 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.643608 4832 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.726767 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.727712 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.728975 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.731333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.731480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.731695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: E0131 04:47:18.743336 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc 
kubenswrapper[4832]: I0131 04:47:18.833724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.833836 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.833881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.833944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc 
kubenswrapper[4832]: I0131 04:47:18.834056 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834295 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.834409 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.935715 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.935842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.935872 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.935934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936210 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936266 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936392 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:18 crc kubenswrapper[4832]: I0131 04:47:18.936865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.045142 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:19 crc kubenswrapper[4832]: W0131 04:47:19.075810 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-49f1cfd108d45dbaa07c75c2b5220ad7587d71781394b0ed650a6ba4a27f14f1 WatchSource:0}: Error finding container 49f1cfd108d45dbaa07c75c2b5220ad7587d71781394b0ed650a6ba4a27f14f1: Status 404 returned error can't find the container with id 49f1cfd108d45dbaa07c75c2b5220ad7587d71781394b0ed650a6ba4a27f14f1 Jan 31 04:47:19 crc kubenswrapper[4832]: E0131 04:47:19.081898 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb768be7a089d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:47:19.080044701 +0000 UTC 
m=+248.028866396,LastTimestamp:2026-01-31 04:47:19.080044701 +0000 UTC m=+248.028866396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.316735 4832 generic.go:334] "Generic (PLEG): container finished" podID="8df36146-a4dc-4f8d-830a-aebc8933d8af" containerID="1cae4911dd2655fc866729338b3c732fe76a6f853e3aae8d4993cda25fedec1c" exitCode=0 Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.316853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df36146-a4dc-4f8d-830a-aebc8933d8af","Type":"ContainerDied","Data":"1cae4911dd2655fc866729338b3c732fe76a6f853e3aae8d4993cda25fedec1c"} Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.318312 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.318881 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.321002 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.322682 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.324433 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695" exitCode=0 Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.324465 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882" exitCode=0 Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.324477 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1" exitCode=0 Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.324488 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f" exitCode=2 Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.324493 4832 scope.go:117] "RemoveContainer" containerID="84af550e92cb4da9e87eaae1512d7737397f50d90f2efe3748ed2247c0a0b220" Jan 31 04:47:19 crc kubenswrapper[4832]: I0131 04:47:19.325671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"49f1cfd108d45dbaa07c75c2b5220ad7587d71781394b0ed650a6ba4a27f14f1"} Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.332401 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d"} Jan 31 04:47:20 
crc kubenswrapper[4832]: I0131 04:47:20.333198 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:20 crc kubenswrapper[4832]: E0131 04:47:20.333236 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.336284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.596171 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.597166 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.662453 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access\") pod \"8df36146-a4dc-4f8d-830a-aebc8933d8af\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.662617 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir\") pod \"8df36146-a4dc-4f8d-830a-aebc8933d8af\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.662647 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock\") pod \"8df36146-a4dc-4f8d-830a-aebc8933d8af\" (UID: \"8df36146-a4dc-4f8d-830a-aebc8933d8af\") " Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.662784 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8df36146-a4dc-4f8d-830a-aebc8933d8af" (UID: "8df36146-a4dc-4f8d-830a-aebc8933d8af"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.662802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock" (OuterVolumeSpecName: "var-lock") pod "8df36146-a4dc-4f8d-830a-aebc8933d8af" (UID: "8df36146-a4dc-4f8d-830a-aebc8933d8af"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.663162 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.663184 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8df36146-a4dc-4f8d-830a-aebc8933d8af-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.673769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8df36146-a4dc-4f8d-830a-aebc8933d8af" (UID: "8df36146-a4dc-4f8d-830a-aebc8933d8af"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:47:20 crc kubenswrapper[4832]: I0131 04:47:20.779109 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8df36146-a4dc-4f8d-830a-aebc8933d8af-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.010143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.011291 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.011900 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.012101 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082689 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082737 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082785 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082809 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.082846 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.083149 4832 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.083167 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.083177 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.353046 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.353923 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140" exitCode=0 Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.354060 4832 scope.go:117] "RemoveContainer" containerID="8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.354089 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.358107 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8df36146-a4dc-4f8d-830a-aebc8933d8af","Type":"ContainerDied","Data":"ab8d39f98712775cd1e3fc0dce86dcb67a8b1e45bafb5494f5051da0ffa38be2"} Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.358146 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab8d39f98712775cd1e3fc0dce86dcb67a8b1e45bafb5494f5051da0ffa38be2" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.358141 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.359530 4832 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.370397 4832 scope.go:117] "RemoveContainer" containerID="57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.381545 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.381991 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.382254 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.382453 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.387635 4832 scope.go:117] "RemoveContainer" containerID="3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.402455 4832 scope.go:117] "RemoveContainer" containerID="44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.416250 4832 scope.go:117] "RemoveContainer" containerID="81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.431482 4832 scope.go:117] "RemoveContainer" containerID="3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.449167 4832 scope.go:117] "RemoveContainer" containerID="8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.449583 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\": container with ID starting with 8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695 not found: ID does not exist" containerID="8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.449642 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695"} err="failed to get container status \"8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\": rpc error: code = NotFound desc = could not find container \"8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695\": container with ID starting with 8ad623a20df30533b63e18553dc989af9a7e3169c835d115fbcd2d054ff26695 not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.449680 4832 scope.go:117] "RemoveContainer" containerID="57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.450207 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\": container with ID starting with 57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882 not found: ID does not exist" containerID="57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.450244 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882"} err="failed to get container status \"57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\": rpc error: code = NotFound desc = could not find container \"57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882\": 
container with ID starting with 57eb5c227602e06b81ced9f37653d7fb00a2ad0b2a78bed5079626e476ce5882 not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.450312 4832 scope.go:117] "RemoveContainer" containerID="3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.450863 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\": container with ID starting with 3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1 not found: ID does not exist" containerID="3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.450891 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1"} err="failed to get container status \"3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\": rpc error: code = NotFound desc = could not find container \"3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1\": container with ID starting with 3a66004f568b4f046a26a3e42b147dd5ef090c166180cc39d69db62c4da319c1 not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.450913 4832 scope.go:117] "RemoveContainer" containerID="44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.451253 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\": container with ID starting with 44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f not found: ID does not exist" 
containerID="44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.451292 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f"} err="failed to get container status \"44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\": rpc error: code = NotFound desc = could not find container \"44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f\": container with ID starting with 44e0c1eed09ea1234b05e91bcc5ad2a4c445faed58cb0bd8f61ab83c11e50f9f not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.451311 4832 scope.go:117] "RemoveContainer" containerID="81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.451782 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\": container with ID starting with 81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140 not found: ID does not exist" containerID="81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.451814 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140"} err="failed to get container status \"81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\": rpc error: code = NotFound desc = could not find container \"81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140\": container with ID starting with 81156df1ffe517a642a98e3518d1bba2afcb3030adfc308a8212a2bf65d3e140 not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.451832 4832 scope.go:117] 
"RemoveContainer" containerID="3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7" Jan 31 04:47:21 crc kubenswrapper[4832]: E0131 04:47:21.452357 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\": container with ID starting with 3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7 not found: ID does not exist" containerID="3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.452640 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7"} err="failed to get container status \"3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\": rpc error: code = NotFound desc = could not find container \"3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7\": container with ID starting with 3c1e04b8fa2caebd324f445cc11d6d0c5b2753812744606303ffe2626bad94f7 not found: ID does not exist" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.862423 4832 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.862795 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:21 crc kubenswrapper[4832]: I0131 04:47:21.867936 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.870385 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.870688 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.871253 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.872007 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.872427 4832 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:22 crc kubenswrapper[4832]: I0131 04:47:22.872468 4832 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 04:47:22 crc kubenswrapper[4832]: E0131 04:47:22.872797 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="200ms" Jan 31 04:47:23 crc kubenswrapper[4832]: E0131 04:47:23.074351 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="400ms" Jan 31 04:47:23 crc kubenswrapper[4832]: E0131 04:47:23.475383 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="800ms" Jan 31 04:47:24 crc kubenswrapper[4832]: E0131 04:47:24.277043 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="1.6s" Jan 31 04:47:25 crc kubenswrapper[4832]: E0131 04:47:25.878974 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="3.2s" Jan 31 04:47:26 crc kubenswrapper[4832]: E0131 04:47:26.298334 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.136:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb768be7a089d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 04:47:19.080044701 +0000 UTC m=+248.028866396,LastTimestamp:2026-01-31 04:47:19.080044701 +0000 UTC m=+248.028866396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 04:47:29 crc kubenswrapper[4832]: E0131 04:47:29.079925 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.136:6443: connect: connection refused" interval="6.4s" Jan 31 04:47:30 crc kubenswrapper[4832]: I0131 04:47:30.859399 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:30 crc kubenswrapper[4832]: I0131 04:47:30.860895 4832 status_manager.go:851] "Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:30 crc kubenswrapper[4832]: I0131 04:47:30.881757 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:30 crc kubenswrapper[4832]: I0131 04:47:30.881809 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:30 crc kubenswrapper[4832]: E0131 04:47:30.882908 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:30 crc kubenswrapper[4832]: I0131 04:47:30.883998 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:30 crc kubenswrapper[4832]: W0131 04:47:30.909316 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-24de4368e2345699670e4a79c460c287fae58b22ab21f95bc303ef35ef081753 WatchSource:0}: Error finding container 24de4368e2345699670e4a79c460c287fae58b22ab21f95bc303ef35ef081753: Status 404 returned error can't find the container with id 24de4368e2345699670e4a79c460c287fae58b22ab21f95bc303ef35ef081753 Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.418538 4832 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0cb321f56e2ee9d621450003830c53be1a22626c90af2f401b391568436b8942" exitCode=0 Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.418608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0cb321f56e2ee9d621450003830c53be1a22626c90af2f401b391568436b8942"} Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.418652 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24de4368e2345699670e4a79c460c287fae58b22ab21f95bc303ef35ef081753"} Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.418958 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.418974 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:31 crc kubenswrapper[4832]: I0131 04:47:31.419424 4832 status_manager.go:851] 
"Failed to get status for pod" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" Jan 31 04:47:31 crc kubenswrapper[4832]: E0131 04:47:31.419574 4832 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.136:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:32 crc kubenswrapper[4832]: I0131 04:47:32.431477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42d2b61f50cd024a469a7c4afbb2ffabcbe60047a79eadd3e4bae00aa45eeda3"} Jan 31 04:47:32 crc kubenswrapper[4832]: I0131 04:47:32.431524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24c7f19682d375bb03adf0afb702f8ab6567ca3fb292ff880c66606e4bf511b0"} Jan 31 04:47:32 crc kubenswrapper[4832]: I0131 04:47:32.431534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e49d76c006e4dede36a8e8f060064a79358b14ba81194052b60f6bae31b59708"} Jan 31 04:47:32 crc kubenswrapper[4832]: I0131 04:47:32.431543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0681e5ae3cf848cedff767775958af0047bb945a7b6e0c4e5824aa0eccaa4b57"} Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.444792 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ba274bc28f558a0c82b39aea045974fae189a168e50e42e508c0197f8df07a0"} Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.445337 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.445214 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.445375 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.448412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.448483 4832 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb" exitCode=1 Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.448529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb"} Jan 31 04:47:33 crc kubenswrapper[4832]: I0131 04:47:33.449244 4832 scope.go:117] "RemoveContainer" containerID="677a275f87c86064abdb43682b42e4583edc0fe62e86f32ba162e44914ee87fb" Jan 31 04:47:34 crc kubenswrapper[4832]: I0131 04:47:34.457993 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 04:47:34 crc kubenswrapper[4832]: I0131 04:47:34.459208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3bac77ebae15392f560efac06e9b25d59cfcd9772d712f211a159004c5766d5"} Jan 31 04:47:35 crc kubenswrapper[4832]: I0131 04:47:35.884309 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:35 crc kubenswrapper[4832]: I0131 04:47:35.884378 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:35 crc kubenswrapper[4832]: I0131 04:47:35.890112 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:38 crc kubenswrapper[4832]: I0131 04:47:38.457238 4832 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:38 crc kubenswrapper[4832]: I0131 04:47:38.484927 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:38 crc kubenswrapper[4832]: I0131 04:47:38.484958 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:38 crc kubenswrapper[4832]: I0131 04:47:38.490061 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:47:38 crc kubenswrapper[4832]: I0131 04:47:38.492956 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6a29d92d-a100-4d26-aacb-5d05e8c1a178" Jan 31 04:47:39 crc kubenswrapper[4832]: I0131 04:47:39.479466 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:47:39 crc kubenswrapper[4832]: I0131 04:47:39.479859 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 04:47:39 crc kubenswrapper[4832]: I0131 04:47:39.480160 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 04:47:39 crc kubenswrapper[4832]: I0131 04:47:39.491358 4832 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:39 crc kubenswrapper[4832]: I0131 04:47:39.491397 4832 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="285111dc-cc04-4ea2-837a-ae8ca5028ee3" Jan 31 04:47:40 crc kubenswrapper[4832]: I0131 04:47:40.047518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:47:41 crc kubenswrapper[4832]: I0131 04:47:41.878614 4832 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="6a29d92d-a100-4d26-aacb-5d05e8c1a178" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.235868 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.236018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.238530 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.239020 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.247903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.256526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.337650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.338194 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.340110 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.351506 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.366346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 
04:47:42.366760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.385029 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.404365 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 04:47:42 crc kubenswrapper[4832]: I0131 04:47:42.600236 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 04:47:42 crc kubenswrapper[4832]: W0131 04:47:42.791430 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-6d9ccd041cf36677143f731fbe6759113a5a7b2c6007c284a8f5715d4a91a69a WatchSource:0}: Error finding container 6d9ccd041cf36677143f731fbe6759113a5a7b2c6007c284a8f5715d4a91a69a: Status 404 returned error can't find the container with id 6d9ccd041cf36677143f731fbe6759113a5a7b2c6007c284a8f5715d4a91a69a Jan 31 04:47:42 crc kubenswrapper[4832]: W0131 04:47:42.900040 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-75d2baf41b3a1f1f56842aeebf723410b2fbda9ad40670ffb5478ed27cbb7111 WatchSource:0}: Error finding container 75d2baf41b3a1f1f56842aeebf723410b2fbda9ad40670ffb5478ed27cbb7111: Status 404 returned error can't find the container 
with id 75d2baf41b3a1f1f56842aeebf723410b2fbda9ad40670ffb5478ed27cbb7111 Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.525830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"80431f9f00d62e659fa4ffab3a19bb765b6d0f3cd4e50fd68666ae6798fa0228"} Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.526477 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6d9ccd041cf36677143f731fbe6759113a5a7b2c6007c284a8f5715d4a91a69a"} Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.528047 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3724e6534ed883ae6d531d1c806d0efb2c9f44e319054049bb545e17b426b93e"} Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.528097 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4fc1712fe61d86abb091d70548b48af559a4f5c61025bf2eb8962ce5a28bb2e0"} Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.528236 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:47:43 crc kubenswrapper[4832]: I0131 04:47:43.530118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6a931b9bd9d3363a4e42f6c2859b6bde715a5dd3c6264dab006267e29ac7d1f9"} Jan 31 04:47:43 crc 
kubenswrapper[4832]: I0131 04:47:43.530176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75d2baf41b3a1f1f56842aeebf723410b2fbda9ad40670ffb5478ed27cbb7111"} Jan 31 04:47:44 crc kubenswrapper[4832]: I0131 04:47:44.537326 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 31 04:47:44 crc kubenswrapper[4832]: I0131 04:47:44.537637 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="80431f9f00d62e659fa4ffab3a19bb765b6d0f3cd4e50fd68666ae6798fa0228" exitCode=255 Jan 31 04:47:44 crc kubenswrapper[4832]: I0131 04:47:44.537829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"80431f9f00d62e659fa4ffab3a19bb765b6d0f3cd4e50fd68666ae6798fa0228"} Jan 31 04:47:44 crc kubenswrapper[4832]: I0131 04:47:44.538194 4832 scope.go:117] "RemoveContainer" containerID="80431f9f00d62e659fa4ffab3a19bb765b6d0f3cd4e50fd68666ae6798fa0228" Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.546878 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.547813 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.547897 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8c4b66a4f6313ecc054cb15ab198037e728c163429bde17248c32c4c1a801da8" exitCode=255 Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.547956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8c4b66a4f6313ecc054cb15ab198037e728c163429bde17248c32c4c1a801da8"} Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.548021 4832 scope.go:117] "RemoveContainer" containerID="80431f9f00d62e659fa4ffab3a19bb765b6d0f3cd4e50fd68666ae6798fa0228" Jan 31 04:47:45 crc kubenswrapper[4832]: I0131 04:47:45.548725 4832 scope.go:117] "RemoveContainer" containerID="8c4b66a4f6313ecc054cb15ab198037e728c163429bde17248c32c4c1a801da8" Jan 31 04:47:45 crc kubenswrapper[4832]: E0131 04:47:45.549061 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:47:46 crc kubenswrapper[4832]: I0131 04:47:46.558127 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 31 04:47:46 crc kubenswrapper[4832]: I0131 04:47:46.558987 4832 scope.go:117] "RemoveContainer" containerID="8c4b66a4f6313ecc054cb15ab198037e728c163429bde17248c32c4c1a801da8" Jan 31 04:47:46 crc kubenswrapper[4832]: E0131 04:47:46.559286 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 04:47:46 crc kubenswrapper[4832]: I0131 04:47:46.610757 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 04:47:48 crc kubenswrapper[4832]: I0131 04:47:48.347770 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 04:47:48 crc kubenswrapper[4832]: I0131 04:47:48.582039 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 04:47:48 crc kubenswrapper[4832]: I0131 04:47:48.643241 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 04:47:48 crc kubenswrapper[4832]: I0131 04:47:48.804918 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 04:47:49 crc kubenswrapper[4832]: I0131 04:47:49.268136 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 04:47:49 crc kubenswrapper[4832]: I0131 04:47:49.480959 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 04:47:49 crc kubenswrapper[4832]: I0131 04:47:49.481166 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 04:47:50 crc kubenswrapper[4832]: I0131 04:47:50.114989 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 04:47:50 crc kubenswrapper[4832]: I0131 04:47:50.159771 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.127237 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.214983 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.226638 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.347130 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.600823 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.619671 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.741709 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.762943 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 04:47:51 crc kubenswrapper[4832]: I0131 04:47:51.885646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.064319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.106275 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.108230 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.108755 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.130360 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.288901 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.338937 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.421934 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 04:47:52 crc 
kubenswrapper[4832]: I0131 04:47:52.468646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.473858 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.618548 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.751968 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.851549 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.873965 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 04:47:52 crc kubenswrapper[4832]: I0131 04:47:52.930819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.003485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.063259 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.074669 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.125245 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.140055 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.175095 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.229924 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.300533 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.372093 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.410900 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.444602 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.491856 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.566003 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.647519 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 
04:47:53.674954 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.710862 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.843724 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 04:47:53 crc kubenswrapper[4832]: I0131 04:47:53.889101 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.000198 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.066115 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.120643 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.152202 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.164612 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.208214 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.300189 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.426868 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.457272 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.648227 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.738229 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.760272 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.874487 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.912205 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.948722 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.972935 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.985318 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 04:47:54 crc kubenswrapper[4832]: I0131 04:47:54.999068 4832 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.293945 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.315671 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.479955 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.526708 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.546479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.603390 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.612707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.637235 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.641175 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.682980 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 
04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.704691 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.845002 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 04:47:55 crc kubenswrapper[4832]: I0131 04:47:55.863803 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.155444 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.191026 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.222446 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.249381 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.285023 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.307903 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.321694 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.334345 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.576922 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.609417 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.613754 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.706845 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.724855 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.738925 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.827862 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.883730 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.912369 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 04:47:56 crc kubenswrapper[4832]: I0131 04:47:56.968509 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 
04:47:57.076409 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.172416 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.623440 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.685429 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.686817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.718943 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.718959 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.732183 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.758052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.796654 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.817609 4832 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.922334 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 04:47:57 crc kubenswrapper[4832]: I0131 04:47:57.968604 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.069032 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.181223 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.186611 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.326290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.332513 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.352313 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.425408 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.537940 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 04:47:58 crc 
kubenswrapper[4832]: I0131 04:47:58.703930 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.776688 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.821955 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.828534 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 04:47:58 crc kubenswrapper[4832]: I0131 04:47:58.830290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.082821 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.100358 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.155761 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.160860 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.205063 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.222066 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.299524 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.432617 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.433199 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.443874 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.465078 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.483375 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.487636 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.523050 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.557448 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.566369 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.578889 
4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.679331 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.679927 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.687955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.699112 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.765992 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.782757 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.785775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.832864 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.897057 4832 scope.go:117] "RemoveContainer" containerID="8c4b66a4f6313ecc054cb15ab198037e728c163429bde17248c32c4c1a801da8" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.912309 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 04:47:59 crc 
kubenswrapper[4832]: I0131 04:47:59.935582 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 04:47:59 crc kubenswrapper[4832]: I0131 04:47:59.986852 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.033757 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.058215 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.079273 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.123270 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.134105 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.140831 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.145880 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.153051 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.178288 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.251549 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.308109 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.351919 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.351946 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.352244 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.423999 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.441436 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.491948 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.614775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.666221 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 31 04:48:00 crc kubenswrapper[4832]: 
I0131 04:48:00.666276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"41fab30c26649aef4bd95e6e746becaa837dd611839ebe2037770f520c512862"} Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.677396 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.687759 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.714209 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.752204 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.835588 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.840448 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.957299 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.980811 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:48:00 crc kubenswrapper[4832]: I0131 04:48:00.999513 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.086614 
4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.142681 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.181283 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.220894 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.248354 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.280480 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.473735 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.539443 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.592342 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.598734 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.598813 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.598988 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.605009 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.635489 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.645037 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.645011292 podStartE2EDuration="23.645011292s" podCreationTimestamp="2026-01-31 04:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:01.617115461 +0000 UTC m=+290.565937186" watchObservedRunningTime="2026-01-31 04:48:01.645011292 +0000 UTC m=+290.593832977" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.655707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.675294 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.790307 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.794315 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.841919 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 
04:48:01.859504 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 04:48:01 crc kubenswrapper[4832]: I0131 04:48:01.934306 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.030875 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.194782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.281263 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.287550 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.327064 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.328373 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.451677 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.474490 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.547401 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.552911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.602338 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.610849 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.636485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.756698 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.804910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.805918 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.889372 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 04:48:02 crc kubenswrapper[4832]: I0131 04:48:02.936403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.011084 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" 
Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.046517 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.050113 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.151701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.249728 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.404503 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.483168 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.496840 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.833215 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.872347 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.905752 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.954114 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.971034 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.984873 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 04:48:03 crc kubenswrapper[4832]: I0131 04:48:03.997844 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.158712 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.263175 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.270854 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.460265 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.482527 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 04:48:04 crc kubenswrapper[4832]: I0131 04:48:04.925447 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.106530 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.313579 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.351824 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.401305 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.759509 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 04:48:05 crc kubenswrapper[4832]: I0131 04:48:05.969316 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 04:48:06 crc kubenswrapper[4832]: I0131 04:48:06.250911 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 04:48:06 crc kubenswrapper[4832]: I0131 04:48:06.581114 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:48:06 crc kubenswrapper[4832]: I0131 04:48:06.632885 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 04:48:06 crc kubenswrapper[4832]: I0131 04:48:06.681076 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 04:48:07 crc kubenswrapper[4832]: I0131 04:48:07.010896 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 04:48:07 crc kubenswrapper[4832]: I0131 04:48:07.128287 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 04:48:08 crc kubenswrapper[4832]: I0131 04:48:08.673869 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 04:48:11 crc kubenswrapper[4832]: I0131 04:48:11.613228 4832 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 04:48:12 crc kubenswrapper[4832]: I0131 04:48:12.189705 4832 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 04:48:12 crc kubenswrapper[4832]: I0131 04:48:12.190045 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d" gracePeriod=5 Jan 31 04:48:12 crc kubenswrapper[4832]: I0131 04:48:12.392037 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.771553 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.772215 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.776932 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.776984 4832 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d" exitCode=137 Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.777032 4832 scope.go:117] "RemoveContainer" containerID="0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.777149 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.797798 4832 scope.go:117] "RemoveContainer" containerID="0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d" Jan 31 04:48:17 crc kubenswrapper[4832]: E0131 04:48:17.798497 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d\": container with ID starting with 0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d not found: ID does not exist" containerID="0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.798529 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d"} err="failed to get container status \"0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d\": rpc error: code = NotFound desc = could 
not find container \"0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d\": container with ID starting with 0f0216426da5b3b19e50e04a84dd2f35b28d26553a5c134ffa538e4a070cc49d not found: ID does not exist" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875623 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875824 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875833 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.875956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876019 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876074 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876353 4832 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876372 4832 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876387 4832 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.876398 4832 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.886507 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 04:48:17 crc kubenswrapper[4832]: I0131 04:48:17.977952 4832 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:19 crc kubenswrapper[4832]: I0131 04:48:19.868082 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 31 04:48:22 crc kubenswrapper[4832]: I0131 04:48:22.818514 4832 generic.go:334] "Generic (PLEG): container finished" podID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerID="321dd495da81547de5f9bf345f631af09d381f70394f2a266f352173ffa10968" exitCode=0
Jan 31 04:48:22 crc kubenswrapper[4832]: I0131 04:48:22.818630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerDied","Data":"321dd495da81547de5f9bf345f631af09d381f70394f2a266f352173ffa10968"}
Jan 31 04:48:22 crc kubenswrapper[4832]: I0131 04:48:22.819624 4832 scope.go:117] "RemoveContainer" containerID="321dd495da81547de5f9bf345f631af09d381f70394f2a266f352173ffa10968"
Jan 31 04:48:23 crc kubenswrapper[4832]: I0131 04:48:23.827119 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerStarted","Data":"46ff08af552e365a1e990234233dd82faf7d5211370aca4d6d64399bb93c4038"}
Jan 31 04:48:23 crc kubenswrapper[4832]: I0131 04:48:23.827822 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4"
Jan 31 04:48:23 crc kubenswrapper[4832]: I0131 04:48:23.828949 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4"
Jan 31 04:48:28 crc kubenswrapper[4832]: I0131 04:48:28.869941 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"]
Jan 31 04:48:28 crc kubenswrapper[4832]: I0131 04:48:28.872152 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager" containerID="cri-o://e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c" gracePeriod=30
Jan 31 04:48:28 crc kubenswrapper[4832]: I0131 04:48:28.933391 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"]
Jan 31 04:48:28 crc kubenswrapper[4832]: I0131 04:48:28.933964 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerName="route-controller-manager" containerID="cri-o://776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49" gracePeriod=30
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.286539 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.345525 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.352819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca\") pod \"d08c2681-edcf-4634-aede-63eb081e72a0\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.352899 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles\") pod \"d08c2681-edcf-4634-aede-63eb081e72a0\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.352960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert\") pod \"d08c2681-edcf-4634-aede-63eb081e72a0\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.352994 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config\") pod \"d08c2681-edcf-4634-aede-63eb081e72a0\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.353098 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q55c\" (UniqueName: \"kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c\") pod \"d08c2681-edcf-4634-aede-63eb081e72a0\" (UID: \"d08c2681-edcf-4634-aede-63eb081e72a0\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.355028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config" (OuterVolumeSpecName: "config") pod "d08c2681-edcf-4634-aede-63eb081e72a0" (UID: "d08c2681-edcf-4634-aede-63eb081e72a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.355100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "d08c2681-edcf-4634-aede-63eb081e72a0" (UID: "d08c2681-edcf-4634-aede-63eb081e72a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.357365 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d08c2681-edcf-4634-aede-63eb081e72a0" (UID: "d08c2681-edcf-4634-aede-63eb081e72a0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.365012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d08c2681-edcf-4634-aede-63eb081e72a0" (UID: "d08c2681-edcf-4634-aede-63eb081e72a0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.363684 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c" (OuterVolumeSpecName: "kube-api-access-7q55c") pod "d08c2681-edcf-4634-aede-63eb081e72a0" (UID: "d08c2681-edcf-4634-aede-63eb081e72a0"). InnerVolumeSpecName "kube-api-access-7q55c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert\") pod \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca\") pod \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config\") pod \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhh9\" (UniqueName: \"kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9\") pod \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\" (UID: \"944af4a5-8c80-4d24-8f2e-ead3cf864aa9\") "
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454891 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454908 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q55c\" (UniqueName: \"kubernetes.io/projected/d08c2681-edcf-4634-aede-63eb081e72a0-kube-api-access-7q55c\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454922 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454933 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d08c2681-edcf-4634-aede-63eb081e72a0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.454943 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d08c2681-edcf-4634-aede-63eb081e72a0-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.455402 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca" (OuterVolumeSpecName: "client-ca") pod "944af4a5-8c80-4d24-8f2e-ead3cf864aa9" (UID: "944af4a5-8c80-4d24-8f2e-ead3cf864aa9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.455886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config" (OuterVolumeSpecName: "config") pod "944af4a5-8c80-4d24-8f2e-ead3cf864aa9" (UID: "944af4a5-8c80-4d24-8f2e-ead3cf864aa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.458136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9" (OuterVolumeSpecName: "kube-api-access-jhhh9") pod "944af4a5-8c80-4d24-8f2e-ead3cf864aa9" (UID: "944af4a5-8c80-4d24-8f2e-ead3cf864aa9"). InnerVolumeSpecName "kube-api-access-jhhh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.460686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "944af4a5-8c80-4d24-8f2e-ead3cf864aa9" (UID: "944af4a5-8c80-4d24-8f2e-ead3cf864aa9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.558415 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.558470 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.558483 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.558501 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhh9\" (UniqueName: \"kubernetes.io/projected/944af4a5-8c80-4d24-8f2e-ead3cf864aa9-kube-api-access-jhhh9\") on node \"crc\" DevicePath \"\""
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.892426 4832 generic.go:334] "Generic (PLEG): container finished" podID="d08c2681-edcf-4634-aede-63eb081e72a0" containerID="e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c" exitCode=0
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.892779 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.892768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" event={"ID":"d08c2681-edcf-4634-aede-63eb081e72a0","Type":"ContainerDied","Data":"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"}
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.893363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwvm" event={"ID":"d08c2681-edcf-4634-aede-63eb081e72a0","Type":"ContainerDied","Data":"55a04650b11c780f01bd2e0875f1a64d7c5ac6ed67ba70b43d9309316348f123"}
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.893431 4832 scope.go:117] "RemoveContainer" containerID="e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.896633 4832 generic.go:334] "Generic (PLEG): container finished" podID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerID="776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49" exitCode=0
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.896692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" event={"ID":"944af4a5-8c80-4d24-8f2e-ead3cf864aa9","Type":"ContainerDied","Data":"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"}
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.896729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm" event={"ID":"944af4a5-8c80-4d24-8f2e-ead3cf864aa9","Type":"ContainerDied","Data":"c69aa58a1545171ad4dd9fd9f9b8d2d927cb7e301aa1be3b22c12be1eadf2cc8"}
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.896822 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.917815 4832 scope.go:117] "RemoveContainer" containerID="e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"
Jan 31 04:48:29 crc kubenswrapper[4832]: E0131 04:48:29.919990 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c\": container with ID starting with e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c not found: ID does not exist" containerID="e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.920097 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c"} err="failed to get container status \"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c\": rpc error: code = NotFound desc = could not find container \"e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c\": container with ID starting with e7aeb86633b5eb84209e5a351e0832884104ca4cd8faa265cc5b3c56c81d5d1c not found: ID does not exist"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.920149 4832 scope.go:117] "RemoveContainer" containerID="776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.926792 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"]
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.938018 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwvm"]
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.938701 4832 scope.go:117] "RemoveContainer" containerID="776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"
Jan 31 04:48:29 crc kubenswrapper[4832]: E0131 04:48:29.939384 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49\": container with ID starting with 776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49 not found: ID does not exist" containerID="776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.939435 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49"} err="failed to get container status \"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49\": rpc error: code = NotFound desc = could not find container \"776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49\": container with ID starting with 776f4b831e28732e19ef850624999c2e20e694317e6a0d0328d2570e41e04d49 not found: ID does not exist"
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.952159 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"]
Jan 31 04:48:29 crc kubenswrapper[4832]: I0131 04:48:29.956146 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k4lgm"]
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608160 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"]
Jan 31 04:48:30 crc kubenswrapper[4832]: E0131 04:48:30.608465 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608485 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: E0131 04:48:30.608507 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" containerName="installer"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608516 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" containerName="installer"
Jan 31 04:48:30 crc kubenswrapper[4832]: E0131 04:48:30.608528 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608537 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 04:48:30 crc kubenswrapper[4832]: E0131 04:48:30.608548 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerName="route-controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608578 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerName="route-controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608696 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608712 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df36146-a4dc-4f8d-830a-aebc8933d8af" containerName="installer"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608728 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" containerName="route-controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.608742 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" containerName="controller-manager"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.609240 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.612791 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.613140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.613380 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.613910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.614216 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.614981 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.619304 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.627302 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"]
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.631935 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.635428 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.635779 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.635838 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.635962 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.636027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.637379 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"]
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.638156 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.657542 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"]
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.673625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.673732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.673762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.673803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.673831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wwz\" (UniqueName: \"kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.775840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.775916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.775946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.775996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.776025 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp42q\" (UniqueName: \"kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.776056 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wwz\" (UniqueName: \"kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.776090 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.776118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.776156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.777498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.777701 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.778093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.781343 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.802353 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wwz\" (UniqueName: \"kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz\") pod \"controller-manager-5d65487979-wgcrr\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.877766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp42q\" (UniqueName: \"kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.877891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.878583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.878636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.879774 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.879887 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.882797 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.895802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp42q\" (UniqueName: \"kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q\") pod \"route-controller-manager-645d5c8f55-jmlg6\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.935791 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:30 crc kubenswrapper[4832]: I0131 04:48:30.953329 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.205435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"]
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.378123 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"]
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.867910 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944af4a5-8c80-4d24-8f2e-ead3cf864aa9" path="/var/lib/kubelet/pods/944af4a5-8c80-4d24-8f2e-ead3cf864aa9/volumes"
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.868638 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08c2681-edcf-4634-aede-63eb081e72a0" path="/var/lib/kubelet/pods/d08c2681-edcf-4634-aede-63eb081e72a0/volumes"
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.911736 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" event={"ID":"caa521aa-03d6-4557-b59a-6bd7bf013448","Type":"ContainerStarted","Data":"8cf44ad583c3e4273a5e22b5044502eda3d832760a952702d90105c70c4937dd"}
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.911783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" event={"ID":"caa521aa-03d6-4557-b59a-6bd7bf013448","Type":"ContainerStarted","Data":"52ecd9d3425f27db10f2384c90af25b5857f7d7c75908c1847aa1350caf1d303"}
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.913748 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr"
Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.916462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" event={"ID":"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b","Type":"ContainerStarted","Data":"a85b642e3359b602d6b1bf12274b335968b376a51be63471b6735405db9e24d5"} Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.916513 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" event={"ID":"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b","Type":"ContainerStarted","Data":"b31156e1722b72a2baf1da6e45eb99c6424663bf931daaf1e6f56f4d1cb88f1a"} Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.916708 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.921178 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.925624 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.931613 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" podStartSLOduration=1.9315960620000001 podStartE2EDuration="1.931596062s" podCreationTimestamp="2026-01-31 04:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:31.928822476 +0000 UTC m=+320.877644171" watchObservedRunningTime="2026-01-31 04:48:31.931596062 +0000 UTC m=+320.880417747" Jan 31 04:48:31 crc kubenswrapper[4832]: I0131 04:48:31.977302 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" podStartSLOduration=1.97728499 podStartE2EDuration="1.97728499s" podCreationTimestamp="2026-01-31 04:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:31.947173399 +0000 UTC m=+320.895995084" watchObservedRunningTime="2026-01-31 04:48:31.97728499 +0000 UTC m=+320.926106675" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.846911 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zjw2c"] Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.848451 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.866139 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zjw2c"] Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-registry-tls\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943808 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01edbe54-1531-48d1-9807-bf358f525550-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943843 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01edbe54-1531-48d1-9807-bf358f525550-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943910 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfdb\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-kube-api-access-bcfdb\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-bound-sa-token\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.943969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-registry-certificates\") pod \"image-registry-66df7c8f76-zjw2c\" 
(UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.944000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-trusted-ca\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:42 crc kubenswrapper[4832]: I0131 04:48:42.975029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-bound-sa-token\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-registry-certificates\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-trusted-ca\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-registry-tls\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045223 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01edbe54-1531-48d1-9807-bf358f525550-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01edbe54-1531-48d1-9807-bf358f525550-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.045296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfdb\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-kube-api-access-bcfdb\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.046922 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01edbe54-1531-48d1-9807-bf358f525550-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.047259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-registry-certificates\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.048108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01edbe54-1531-48d1-9807-bf358f525550-trusted-ca\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.055432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-registry-tls\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.057269 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01edbe54-1531-48d1-9807-bf358f525550-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc 
kubenswrapper[4832]: I0131 04:48:43.066218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfdb\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-kube-api-access-bcfdb\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.069147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01edbe54-1531-48d1-9807-bf358f525550-bound-sa-token\") pod \"image-registry-66df7c8f76-zjw2c\" (UID: \"01edbe54-1531-48d1-9807-bf358f525550\") " pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.168511 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.588860 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zjw2c"] Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.987373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" event={"ID":"01edbe54-1531-48d1-9807-bf358f525550","Type":"ContainerStarted","Data":"e6fc708bd7f185cf33c6e75ecac81f6935d0ac1b58fc53452ed08914d74be939"} Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.989345 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" event={"ID":"01edbe54-1531-48d1-9807-bf358f525550","Type":"ContainerStarted","Data":"20b83d297e69c49f9cbb28d4d304928e69d6698393b44ea350c4a046a0c1e62a"} Jan 31 04:48:43 crc kubenswrapper[4832]: I0131 04:48:43.989471 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:48:44 crc kubenswrapper[4832]: I0131 04:48:44.012977 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" podStartSLOduration=2.012953537 podStartE2EDuration="2.012953537s" podCreationTimestamp="2026-01-31 04:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:44.010601003 +0000 UTC m=+332.959422688" watchObservedRunningTime="2026-01-31 04:48:44.012953537 +0000 UTC m=+332.961775222" Jan 31 04:48:48 crc kubenswrapper[4832]: I0131 04:48:48.539934 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:48:48 crc kubenswrapper[4832]: I0131 04:48:48.540413 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:48:48 crc kubenswrapper[4832]: I0131 04:48:48.788211 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"] Jan 31 04:48:48 crc kubenswrapper[4832]: I0131 04:48:48.788762 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" podUID="caa521aa-03d6-4557-b59a-6bd7bf013448" containerName="controller-manager" containerID="cri-o://8cf44ad583c3e4273a5e22b5044502eda3d832760a952702d90105c70c4937dd" gracePeriod=30 Jan 31 04:48:48 crc 
kubenswrapper[4832]: I0131 04:48:48.807173 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"] Jan 31 04:48:48 crc kubenswrapper[4832]: I0131 04:48:48.807531 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" podUID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" containerName="route-controller-manager" containerID="cri-o://a85b642e3359b602d6b1bf12274b335968b376a51be63471b6735405db9e24d5" gracePeriod=30 Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.022638 4832 generic.go:334] "Generic (PLEG): container finished" podID="caa521aa-03d6-4557-b59a-6bd7bf013448" containerID="8cf44ad583c3e4273a5e22b5044502eda3d832760a952702d90105c70c4937dd" exitCode=0 Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.022712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" event={"ID":"caa521aa-03d6-4557-b59a-6bd7bf013448","Type":"ContainerDied","Data":"8cf44ad583c3e4273a5e22b5044502eda3d832760a952702d90105c70c4937dd"} Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.025484 4832 generic.go:334] "Generic (PLEG): container finished" podID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" containerID="a85b642e3359b602d6b1bf12274b335968b376a51be63471b6735405db9e24d5" exitCode=0 Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.025550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" event={"ID":"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b","Type":"ContainerDied","Data":"a85b642e3359b602d6b1bf12274b335968b376a51be63471b6735405db9e24d5"} Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.345779 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.430209 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.442733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config\") pod \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.442810 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert\") pod \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.442933 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca\") pod \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.442976 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp42q\" (UniqueName: \"kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q\") pod \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\" (UID: \"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.443744 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config" (OuterVolumeSpecName: "config") pod 
"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" (UID: "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.444681 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca" (OuterVolumeSpecName: "client-ca") pod "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" (UID: "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.449496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q" (OuterVolumeSpecName: "kube-api-access-lp42q") pod "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" (UID: "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b"). InnerVolumeSpecName "kube-api-access-lp42q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.449751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" (UID: "a2e27a4e-76a8-41f9-be9e-74301f1a5f3b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.544829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles\") pod \"caa521aa-03d6-4557-b59a-6bd7bf013448\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.544906 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wwz\" (UniqueName: \"kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz\") pod \"caa521aa-03d6-4557-b59a-6bd7bf013448\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545005 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config\") pod \"caa521aa-03d6-4557-b59a-6bd7bf013448\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca\") pod \"caa521aa-03d6-4557-b59a-6bd7bf013448\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545085 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert\") pod \"caa521aa-03d6-4557-b59a-6bd7bf013448\" (UID: \"caa521aa-03d6-4557-b59a-6bd7bf013448\") " Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545317 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp42q\" (UniqueName: 
\"kubernetes.io/projected/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-kube-api-access-lp42q\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545353 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545363 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.545371 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.546454 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "caa521aa-03d6-4557-b59a-6bd7bf013448" (UID: "caa521aa-03d6-4557-b59a-6bd7bf013448"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.546475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config" (OuterVolumeSpecName: "config") pod "caa521aa-03d6-4557-b59a-6bd7bf013448" (UID: "caa521aa-03d6-4557-b59a-6bd7bf013448"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.546807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca" (OuterVolumeSpecName: "client-ca") pod "caa521aa-03d6-4557-b59a-6bd7bf013448" (UID: "caa521aa-03d6-4557-b59a-6bd7bf013448"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.548327 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz" (OuterVolumeSpecName: "kube-api-access-c2wwz") pod "caa521aa-03d6-4557-b59a-6bd7bf013448" (UID: "caa521aa-03d6-4557-b59a-6bd7bf013448"). InnerVolumeSpecName "kube-api-access-c2wwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.548437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "caa521aa-03d6-4557-b59a-6bd7bf013448" (UID: "caa521aa-03d6-4557-b59a-6bd7bf013448"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.646901 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.646935 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.646946 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caa521aa-03d6-4557-b59a-6bd7bf013448-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.646955 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/caa521aa-03d6-4557-b59a-6bd7bf013448-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:49 crc kubenswrapper[4832]: I0131 04:48:49.646965 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2wwz\" (UniqueName: \"kubernetes.io/projected/caa521aa-03d6-4557-b59a-6bd7bf013448-kube-api-access-c2wwz\") on node \"crc\" DevicePath \"\"" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.034175 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.034200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d65487979-wgcrr" event={"ID":"caa521aa-03d6-4557-b59a-6bd7bf013448","Type":"ContainerDied","Data":"52ecd9d3425f27db10f2384c90af25b5857f7d7c75908c1847aa1350caf1d303"} Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.034268 4832 scope.go:117] "RemoveContainer" containerID="8cf44ad583c3e4273a5e22b5044502eda3d832760a952702d90105c70c4937dd" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.037333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" event={"ID":"a2e27a4e-76a8-41f9-be9e-74301f1a5f3b","Type":"ContainerDied","Data":"b31156e1722b72a2baf1da6e45eb99c6424663bf931daaf1e6f56f4d1cb88f1a"} Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.037407 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.065906 4832 scope.go:117] "RemoveContainer" containerID="a85b642e3359b602d6b1bf12274b335968b376a51be63471b6735405db9e24d5" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.072488 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.081637 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d65487979-wgcrr"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.094934 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.099937 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-jmlg6"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.715109 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6498c588b7-hwq5t"] Jan 31 04:48:50 crc kubenswrapper[4832]: E0131 04:48:50.715509 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" containerName="route-controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.715530 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" containerName="route-controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: E0131 04:48:50.715550 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa521aa-03d6-4557-b59a-6bd7bf013448" containerName="controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.715586 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="caa521aa-03d6-4557-b59a-6bd7bf013448" containerName="controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.715742 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" containerName="route-controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.715771 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa521aa-03d6-4557-b59a-6bd7bf013448" containerName="controller-manager" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.716372 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.719964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.720206 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.720955 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.721480 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.721487 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.721641 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.732756 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.735782 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.736886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.739192 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.740043 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.740463 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.740546 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.740650 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.746160 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.746493 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6498c588b7-hwq5t"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.752290 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb"] Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763527 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-config\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k526h\" (UniqueName: \"kubernetes.io/projected/c1cd92f6-acd8-4ff4-b018-3292cca01150-kube-api-access-k526h\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763707 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9918fe95-de3e-4508-9984-f24710ed75c2-serving-cert\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-proxy-ca-bundles\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763797 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-config\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.763936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-client-ca\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.764067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-client-ca\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.764111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jll\" (UniqueName: \"kubernetes.io/projected/9918fe95-de3e-4508-9984-f24710ed75c2-kube-api-access-85jll\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.764493 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cd92f6-acd8-4ff4-b018-3292cca01150-serving-cert\") pod 
\"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k526h\" (UniqueName: \"kubernetes.io/projected/c1cd92f6-acd8-4ff4-b018-3292cca01150-kube-api-access-k526h\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9918fe95-de3e-4508-9984-f24710ed75c2-serving-cert\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-proxy-ca-bundles\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-config\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865531 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-client-ca\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865545 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-client-ca\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jll\" (UniqueName: \"kubernetes.io/projected/9918fe95-de3e-4508-9984-f24710ed75c2-kube-api-access-85jll\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.866648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-client-ca\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.865616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cd92f6-acd8-4ff4-b018-3292cca01150-serving-cert\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " 
pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.866745 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-config\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.866982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-proxy-ca-bundles\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.867274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1cd92f6-acd8-4ff4-b018-3292cca01150-config\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.868530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-client-ca\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.868597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9918fe95-de3e-4508-9984-f24710ed75c2-config\") pod 
\"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.873664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9918fe95-de3e-4508-9984-f24710ed75c2-serving-cert\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.876514 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1cd92f6-acd8-4ff4-b018-3292cca01150-serving-cert\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.884367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jll\" (UniqueName: \"kubernetes.io/projected/9918fe95-de3e-4508-9984-f24710ed75c2-kube-api-access-85jll\") pod \"controller-manager-6498c588b7-hwq5t\" (UID: \"9918fe95-de3e-4508-9984-f24710ed75c2\") " pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:50 crc kubenswrapper[4832]: I0131 04:48:50.893926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k526h\" (UniqueName: \"kubernetes.io/projected/c1cd92f6-acd8-4ff4-b018-3292cca01150-kube-api-access-k526h\") pod \"route-controller-manager-7fc555b58f-ct8gb\" (UID: \"c1cd92f6-acd8-4ff4-b018-3292cca01150\") " pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.048066 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.074821 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.500153 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6498c588b7-hwq5t"] Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.536792 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb"] Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.866004 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e27a4e-76a8-41f9-be9e-74301f1a5f3b" path="/var/lib/kubelet/pods/a2e27a4e-76a8-41f9-be9e-74301f1a5f3b/volumes" Jan 31 04:48:51 crc kubenswrapper[4832]: I0131 04:48:51.867167 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa521aa-03d6-4557-b59a-6bd7bf013448" path="/var/lib/kubelet/pods/caa521aa-03d6-4557-b59a-6bd7bf013448/volumes" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.056161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" event={"ID":"9918fe95-de3e-4508-9984-f24710ed75c2","Type":"ContainerStarted","Data":"b73054314f142e8bc76c2edf51dd1cd0500d25b88aa42ba74b2f7d1376e2861f"} Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.056213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" event={"ID":"9918fe95-de3e-4508-9984-f24710ed75c2","Type":"ContainerStarted","Data":"28c6d5523b78a702bec1518034269dd49e268710d13c93acedc262ec31611f63"} Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.056456 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.057258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" event={"ID":"c1cd92f6-acd8-4ff4-b018-3292cca01150","Type":"ContainerStarted","Data":"771477d7fd942bb249e94ad7d7a16e31938d385b49949ee1fa797569b4e2790e"} Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.057279 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" event={"ID":"c1cd92f6-acd8-4ff4-b018-3292cca01150","Type":"ContainerStarted","Data":"fd0395a091fd356d1c77346d61d365ee122f6b86bcd9b4d25b578a2235c48e02"} Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.057484 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.061107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.071663 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6498c588b7-hwq5t" podStartSLOduration=4.071647544 podStartE2EDuration="4.071647544s" podCreationTimestamp="2026-01-31 04:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:52.070373935 +0000 UTC m=+341.019195610" watchObservedRunningTime="2026-01-31 04:48:52.071647544 +0000 UTC m=+341.020469229" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.122687 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" podStartSLOduration=4.122667074 podStartE2EDuration="4.122667074s" podCreationTimestamp="2026-01-31 04:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:48:52.117331997 +0000 UTC m=+341.066153682" watchObservedRunningTime="2026-01-31 04:48:52.122667074 +0000 UTC m=+341.071488759" Jan 31 04:48:52 crc kubenswrapper[4832]: I0131 04:48:52.355603 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fc555b58f-ct8gb" Jan 31 04:49:03 crc kubenswrapper[4832]: I0131 04:49:03.177788 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zjw2c" Jan 31 04:49:03 crc kubenswrapper[4832]: I0131 04:49:03.259198 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:49:18 crc kubenswrapper[4832]: I0131 04:49:18.539866 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:49:18 crc kubenswrapper[4832]: I0131 04:49:18.540556 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.081391 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 
04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.081980 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5sr9c" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="registry-server" containerID="cri-o://f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" gracePeriod=30 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.084923 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.085080 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f26d5" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="registry-server" containerID="cri-o://66fe5b8a177ba3b91f49f50cce134336e894b89d0c895c326c2405f0ccafd77d" gracePeriod=30 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.102389 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.102778 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" containerID="cri-o://46ff08af552e365a1e990234233dd82faf7d5211370aca4d6d64399bb93c4038" gracePeriod=30 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.132118 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.152580 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tglv7"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.154127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.168919 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.170086 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s577w" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="registry-server" containerID="cri-o://6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9" gracePeriod=30 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.177740 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tglv7"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.195967 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.196223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2xb\" (UniqueName: \"kubernetes.io/projected/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-kube-api-access-fb2xb\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.196363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.271147 4832 generic.go:334] "Generic (PLEG): container finished" podID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerID="46ff08af552e365a1e990234233dd82faf7d5211370aca4d6d64399bb93c4038" exitCode=0 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.271238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerDied","Data":"46ff08af552e365a1e990234233dd82faf7d5211370aca4d6d64399bb93c4038"} Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.271841 4832 scope.go:117] "RemoveContainer" containerID="321dd495da81547de5f9bf345f631af09d381f70394f2a266f352173ffa10968" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.290805 4832 generic.go:334] "Generic (PLEG): container finished" podID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerID="66fe5b8a177ba3b91f49f50cce134336e894b89d0c895c326c2405f0ccafd77d" exitCode=0 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.290875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerDied","Data":"66fe5b8a177ba3b91f49f50cce134336e894b89d0c895c326c2405f0ccafd77d"} Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.297907 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.297946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2xb\" (UniqueName: \"kubernetes.io/projected/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-kube-api-access-fb2xb\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.297991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.299741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.305542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.320136 4832 generic.go:334] "Generic (PLEG): container finished" podID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" 
containerID="f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" exitCode=0 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.320424 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4bzcs" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="registry-server" containerID="cri-o://25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f" gracePeriod=30 Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.320712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerDied","Data":"f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209"} Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.326432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2xb\" (UniqueName: \"kubernetes.io/projected/a23cd15a-ae33-49a9-bf22-0f0e4786b18f-kube-api-access-fb2xb\") pod \"marketplace-operator-79b997595-tglv7\" (UID: \"a23cd15a-ae33-49a9-bf22-0f0e4786b18f\") " pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.591762 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.598971 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.704471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj49g\" (UniqueName: \"kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g\") pod \"fadd223c-2d95-4429-be17-6f15be7dbbbc\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.704595 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities\") pod \"fadd223c-2d95-4429-be17-6f15be7dbbbc\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.704628 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content\") pod \"fadd223c-2d95-4429-be17-6f15be7dbbbc\" (UID: \"fadd223c-2d95-4429-be17-6f15be7dbbbc\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.709103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities" (OuterVolumeSpecName: "utilities") pod "fadd223c-2d95-4429-be17-6f15be7dbbbc" (UID: "fadd223c-2d95-4429-be17-6f15be7dbbbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.712130 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g" (OuterVolumeSpecName: "kube-api-access-nj49g") pod "fadd223c-2d95-4429-be17-6f15be7dbbbc" (UID: "fadd223c-2d95-4429-be17-6f15be7dbbbc"). InnerVolumeSpecName "kube-api-access-nj49g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.784354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fadd223c-2d95-4429-be17-6f15be7dbbbc" (UID: "fadd223c-2d95-4429-be17-6f15be7dbbbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: E0131 04:49:22.808885 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209 is running failed: container process not found" containerID="f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:49:22 crc kubenswrapper[4832]: E0131 04:49:22.809362 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209 is running failed: container process not found" containerID="f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.809713 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:49:22 crc kubenswrapper[4832]: E0131 04:49:22.810286 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209 is running failed: container process not found" containerID="f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 04:49:22 crc kubenswrapper[4832]: E0131 04:49:22.810353 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-5sr9c" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="registry-server" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.818758 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.818779 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fadd223c-2d95-4429-be17-6f15be7dbbbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.818791 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj49g\" (UniqueName: \"kubernetes.io/projected/fadd223c-2d95-4429-be17-6f15be7dbbbc-kube-api-access-nj49g\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.837953 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.847545 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content\") pod \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h65qw\" (UniqueName: \"kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw\") pod \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919470 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j45hs\" (UniqueName: \"kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs\") pod \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59n2j\" (UniqueName: \"kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j\") pod \"305e29e9-933c-4098-a650-7d06eacb2ed6\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca\") pod \"305e29e9-933c-4098-a650-7d06eacb2ed6\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.919532 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities\") pod \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.920364 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "305e29e9-933c-4098-a650-7d06eacb2ed6" (UID: "305e29e9-933c-4098-a650-7d06eacb2ed6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.920654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities" (OuterVolumeSpecName: "utilities") pod "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" (UID: "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.920703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics\") pod \"305e29e9-933c-4098-a650-7d06eacb2ed6\" (UID: \"305e29e9-933c-4098-a650-7d06eacb2ed6\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.920748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content\") pod \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\" (UID: \"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.921809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities" (OuterVolumeSpecName: "utilities") pod "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" (UID: "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.922968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw" (OuterVolumeSpecName: "kube-api-access-h65qw") pod "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" (UID: "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8"). InnerVolumeSpecName "kube-api-access-h65qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.923449 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs" (OuterVolumeSpecName: "kube-api-access-j45hs") pod "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" (UID: "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5"). InnerVolumeSpecName "kube-api-access-j45hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.924111 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "305e29e9-933c-4098-a650-7d06eacb2ed6" (UID: "305e29e9-933c-4098-a650-7d06eacb2ed6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.928702 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.929513 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j" (OuterVolumeSpecName: "kube-api-access-59n2j") pod "305e29e9-933c-4098-a650-7d06eacb2ed6" (UID: "305e29e9-933c-4098-a650-7d06eacb2ed6"). InnerVolumeSpecName "kube-api-access-59n2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.920786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities\") pod \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\" (UID: \"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8\") " Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937131 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937168 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937239 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h65qw\" (UniqueName: \"kubernetes.io/projected/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-kube-api-access-h65qw\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937249 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j45hs\" (UniqueName: \"kubernetes.io/projected/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-kube-api-access-j45hs\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937257 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59n2j\" (UniqueName: \"kubernetes.io/projected/305e29e9-933c-4098-a650-7d06eacb2ed6-kube-api-access-59n2j\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937268 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/305e29e9-933c-4098-a650-7d06eacb2ed6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.937276 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:22 crc kubenswrapper[4832]: I0131 04:49:22.984289 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" (UID: "383f7aea-cd36-47b8-8a08-fbb8a60e9ab5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.038182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities\") pod \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.038320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c49t\" (UniqueName: \"kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t\") pod \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.038350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content\") pod \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\" (UID: \"1c79b0fe-2283-47ce-a36d-800a09a3f29e\") " Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.038610 4832 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.038924 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities" (OuterVolumeSpecName: "utilities") pod "1c79b0fe-2283-47ce-a36d-800a09a3f29e" (UID: "1c79b0fe-2283-47ce-a36d-800a09a3f29e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.041473 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t" (OuterVolumeSpecName: "kube-api-access-8c49t") pod "1c79b0fe-2283-47ce-a36d-800a09a3f29e" (UID: "1c79b0fe-2283-47ce-a36d-800a09a3f29e"). InnerVolumeSpecName "kube-api-access-8c49t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.049026 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" (UID: "5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.059743 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c79b0fe-2283-47ce-a36d-800a09a3f29e" (UID: "1c79b0fe-2283-47ce-a36d-800a09a3f29e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.139574 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.139611 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c79b0fe-2283-47ce-a36d-800a09a3f29e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.139623 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.139637 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c49t\" (UniqueName: \"kubernetes.io/projected/1c79b0fe-2283-47ce-a36d-800a09a3f29e-kube-api-access-8c49t\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.144919 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tglv7"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.330281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f26d5" event={"ID":"fadd223c-2d95-4429-be17-6f15be7dbbbc","Type":"ContainerDied","Data":"594181eac747b298e40d2e7fbedf0cdd5201f9f30aa359875588ede64b420315"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.330364 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f26d5" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.330767 4832 scope.go:117] "RemoveContainer" containerID="66fe5b8a177ba3b91f49f50cce134336e894b89d0c895c326c2405f0ccafd77d" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.334762 4832 generic.go:334] "Generic (PLEG): container finished" podID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerID="6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9" exitCode=0 Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.334821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerDied","Data":"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.334847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s577w" event={"ID":"5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8","Type":"ContainerDied","Data":"70e06639d13f47a61cfcb795c4d4e3ff77ef0a8b20b0611d45ac7985078f99cc"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.334941 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s577w" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.337545 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" event={"ID":"a23cd15a-ae33-49a9-bf22-0f0e4786b18f","Type":"ContainerStarted","Data":"52328b569d2d8b6e140b2c080d6dc63ca3eefb9ae0a3789b903113d41372a4c7"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.339177 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.340944 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tglv7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.340983 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" podUID="a23cd15a-ae33-49a9-bf22-0f0e4786b18f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.342671 4832 generic.go:334] "Generic (PLEG): container finished" podID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerID="25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f" exitCode=0 Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.342827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerDied","Data":"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.342924 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bzcs" event={"ID":"1c79b0fe-2283-47ce-a36d-800a09a3f29e","Type":"ContainerDied","Data":"f6fed4e35db88f547680cc1aa8e5a1cbd7320214f510c968891e5fbe134acd3d"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.343181 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bzcs" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.348813 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.349289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vg9j4" event={"ID":"305e29e9-933c-4098-a650-7d06eacb2ed6","Type":"ContainerDied","Data":"8061de41dc6d487abcc22f2cb141011c6d238cc2aee2d3978b5621574666c027"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.352228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sr9c" event={"ID":"383f7aea-cd36-47b8-8a08-fbb8a60e9ab5","Type":"ContainerDied","Data":"888ed1e38fa5f463b77074a38e1b6d11acdc8bf559c476e8502cafc32df85e0b"} Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.352651 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5sr9c" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.362607 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" podStartSLOduration=1.362555388 podStartE2EDuration="1.362555388s" podCreationTimestamp="2026-01-31 04:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:49:23.355816028 +0000 UTC m=+372.304637733" watchObservedRunningTime="2026-01-31 04:49:23.362555388 +0000 UTC m=+372.311377073" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.364156 4832 scope.go:117] "RemoveContainer" containerID="297ba5c9ffff3caae183ccde1505191ade6c21258993d7a5942606f8051c7a2f" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.401303 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.406550 4832 scope.go:117] "RemoveContainer" containerID="e0e1de7c46c907e873b65bfdd3a08083456a2d2d4601e36ede6b7e350ca209c4" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.406831 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s577w"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.412461 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.415032 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f26d5"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.425597 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.437785 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vg9j4"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.441097 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.443263 4832 scope.go:117] "RemoveContainer" containerID="6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.444162 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bzcs"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.446882 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.450498 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5sr9c"] Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.456631 4832 scope.go:117] "RemoveContainer" containerID="fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.481938 4832 scope.go:117] "RemoveContainer" containerID="c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.504545 4832 scope.go:117] "RemoveContainer" containerID="6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9" Jan 31 04:49:23 crc kubenswrapper[4832]: E0131 04:49:23.505400 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9\": container with ID starting with 6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9 not found: ID does not exist" containerID="6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 
04:49:23.505447 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9"} err="failed to get container status \"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9\": rpc error: code = NotFound desc = could not find container \"6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9\": container with ID starting with 6f366ead54192a52b88fb0b9d46ffdf0943fec04026f51a06990766fa82d56c9 not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.505479 4832 scope.go:117] "RemoveContainer" containerID="fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e" Jan 31 04:49:23 crc kubenswrapper[4832]: E0131 04:49:23.506855 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e\": container with ID starting with fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e not found: ID does not exist" containerID="fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.506918 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e"} err="failed to get container status \"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e\": rpc error: code = NotFound desc = could not find container \"fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e\": container with ID starting with fbb6e37918034c539953b944175b91aa8b3c029c8350a3688b370289ec8c282e not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.506939 4832 scope.go:117] "RemoveContainer" containerID="c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430" Jan 31 04:49:23 crc 
kubenswrapper[4832]: E0131 04:49:23.509925 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430\": container with ID starting with c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430 not found: ID does not exist" containerID="c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.509960 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430"} err="failed to get container status \"c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430\": rpc error: code = NotFound desc = could not find container \"c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430\": container with ID starting with c96f18ad60a46bb7ed00e9e9c8339511e1e2c6290a73f1ddc72fc00b070e9430 not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.509977 4832 scope.go:117] "RemoveContainer" containerID="25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.523095 4832 scope.go:117] "RemoveContainer" containerID="29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.541826 4832 scope.go:117] "RemoveContainer" containerID="88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.561690 4832 scope.go:117] "RemoveContainer" containerID="25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f" Jan 31 04:49:23 crc kubenswrapper[4832]: E0131 04:49:23.562570 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f\": container with ID starting with 25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f not found: ID does not exist" containerID="25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.562612 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f"} err="failed to get container status \"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f\": rpc error: code = NotFound desc = could not find container \"25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f\": container with ID starting with 25af3f832c74656b141ec093db206fc247fa7aa4a8386868edf119b665c3907f not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.562639 4832 scope.go:117] "RemoveContainer" containerID="29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763" Jan 31 04:49:23 crc kubenswrapper[4832]: E0131 04:49:23.563089 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763\": container with ID starting with 29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763 not found: ID does not exist" containerID="29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.563135 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763"} err="failed to get container status \"29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763\": rpc error: code = NotFound desc = could not find container \"29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763\": container with ID 
starting with 29619407fc95baf7a31cf6818cde992d9d490a90e4366e0c97d5da9e2146f763 not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.563176 4832 scope.go:117] "RemoveContainer" containerID="88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c" Jan 31 04:49:23 crc kubenswrapper[4832]: E0131 04:49:23.563453 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c\": container with ID starting with 88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c not found: ID does not exist" containerID="88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.563478 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c"} err="failed to get container status \"88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c\": rpc error: code = NotFound desc = could not find container \"88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c\": container with ID starting with 88932ca5f05cb7cba3f9e29ac1b8aadb341ec1741b8c30adde1e54f2bb3c4f4c not found: ID does not exist" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.563493 4832 scope.go:117] "RemoveContainer" containerID="46ff08af552e365a1e990234233dd82faf7d5211370aca4d6d64399bb93c4038" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.581866 4832 scope.go:117] "RemoveContainer" containerID="f8d6f7016f68a7a99e041f58a034220a881b25bd2a79ca9ac5e9ca8670f90209" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.597618 4832 scope.go:117] "RemoveContainer" containerID="f93c359f6f923444a5a66286732277e363cbdb5a973b3120ea1b549149fe3ba9" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.610419 4832 scope.go:117] "RemoveContainer" 
containerID="b95ae123252ef8d2c53065b2e3f18618ef7ed93e8e1b76e980dcf18f1dc5456a" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.866116 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" path="/var/lib/kubelet/pods/1c79b0fe-2283-47ce-a36d-800a09a3f29e/volumes" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.866913 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" path="/var/lib/kubelet/pods/305e29e9-933c-4098-a650-7d06eacb2ed6/volumes" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.867518 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" path="/var/lib/kubelet/pods/383f7aea-cd36-47b8-8a08-fbb8a60e9ab5/volumes" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.868769 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" path="/var/lib/kubelet/pods/5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8/volumes" Jan 31 04:49:23 crc kubenswrapper[4832]: I0131 04:49:23.869476 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" path="/var/lib/kubelet/pods/fadd223c-2d95-4429-be17-6f15be7dbbbc/volumes" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.193728 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgrt2"] Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.193942 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.193954 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.193965 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.193971 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.193980 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.193985 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.193992 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.193998 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194006 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194013 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194025 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194030 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194041 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194047 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194055 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194061 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194069 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194075 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194083 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194088 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="extract-utilities" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194096 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194102 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194110 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194117 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194125 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194130 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: E0131 04:49:24.194137 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194143 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="extract-content" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194231 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fadd223c-2d95-4429-be17-6f15be7dbbbc" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194246 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="383f7aea-cd36-47b8-8a08-fbb8a60e9ab5" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194254 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb7c4a8-ed79-44d9-87c6-e06ca68a70d8" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194261 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 
04:49:24.194267 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="305e29e9-933c-4098-a650-7d06eacb2ed6" containerName="marketplace-operator" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.194276 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c79b0fe-2283-47ce-a36d-800a09a3f29e" containerName="registry-server" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.195629 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.199091 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.205573 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgrt2"] Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.255755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-utilities\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.255815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bscj\" (UniqueName: \"kubernetes.io/projected/271e0384-a8f3-41b0-a543-28210590699c-kube-api-access-7bscj\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.255837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-catalog-content\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.357482 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bscj\" (UniqueName: \"kubernetes.io/projected/271e0384-a8f3-41b0-a543-28210590699c-kube-api-access-7bscj\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.357536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-catalog-content\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.358148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-catalog-content\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.358232 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-utilities\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.358524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/271e0384-a8f3-41b0-a543-28210590699c-utilities\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.367225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" event={"ID":"a23cd15a-ae33-49a9-bf22-0f0e4786b18f","Type":"ContainerStarted","Data":"8f66e5e9e2cdf690edadd428badba9e85298a798ae0a7c737fde66c41d4b7bfb"} Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.371038 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tglv7" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.392780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bscj\" (UniqueName: \"kubernetes.io/projected/271e0384-a8f3-41b0-a543-28210590699c-kube-api-access-7bscj\") pod \"community-operators-qgrt2\" (UID: \"271e0384-a8f3-41b0-a543-28210590699c\") " pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.403127 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crxls"] Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.404306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.409988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.450780 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crxls"] Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.548151 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.564724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-utilities\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.564810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrh9d\" (UniqueName: \"kubernetes.io/projected/f9f544b6-248c-4f10-8c30-4a976fb6a35c-kube-api-access-vrh9d\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.564868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-catalog-content\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.666709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrh9d\" (UniqueName: \"kubernetes.io/projected/f9f544b6-248c-4f10-8c30-4a976fb6a35c-kube-api-access-vrh9d\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.666863 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-catalog-content\") pod 
\"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.666989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-utilities\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.667684 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-utilities\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.667814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f544b6-248c-4f10-8c30-4a976fb6a35c-catalog-content\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.697861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrh9d\" (UniqueName: \"kubernetes.io/projected/f9f544b6-248c-4f10-8c30-4a976fb6a35c-kube-api-access-vrh9d\") pod \"certified-operators-crxls\" (UID: \"f9f544b6-248c-4f10-8c30-4a976fb6a35c\") " pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.750950 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:24 crc kubenswrapper[4832]: I0131 04:49:24.966091 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgrt2"] Jan 31 04:49:24 crc kubenswrapper[4832]: W0131 04:49:24.973085 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod271e0384_a8f3_41b0_a543_28210590699c.slice/crio-35ed721454b342ffa340abd0d7be048ebe3e0f0282ab2c75c2f7678b84e715d5 WatchSource:0}: Error finding container 35ed721454b342ffa340abd0d7be048ebe3e0f0282ab2c75c2f7678b84e715d5: Status 404 returned error can't find the container with id 35ed721454b342ffa340abd0d7be048ebe3e0f0282ab2c75c2f7678b84e715d5 Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.201059 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crxls"] Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.376135 4832 generic.go:334] "Generic (PLEG): container finished" podID="271e0384-a8f3-41b0-a543-28210590699c" containerID="9700154aa1eaac0c555b526c4998646bb1b3f3d46b826ef8478187c62f9c82b4" exitCode=0 Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.376240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgrt2" event={"ID":"271e0384-a8f3-41b0-a543-28210590699c","Type":"ContainerDied","Data":"9700154aa1eaac0c555b526c4998646bb1b3f3d46b826ef8478187c62f9c82b4"} Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.376277 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgrt2" event={"ID":"271e0384-a8f3-41b0-a543-28210590699c","Type":"ContainerStarted","Data":"35ed721454b342ffa340abd0d7be048ebe3e0f0282ab2c75c2f7678b84e715d5"} Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.381851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerStarted","Data":"448774b9acd3df5be909c3711600acf08f3d5cab454c7b12edb6a130d81caedb"} Jan 31 04:49:25 crc kubenswrapper[4832]: I0131 04:49:25.381908 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerStarted","Data":"ac091f171f5cf470a7ae3209c2bb6f122f5bc705822d7b9ddb994642d4ab7486"} Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.390355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgrt2" event={"ID":"271e0384-a8f3-41b0-a543-28210590699c","Type":"ContainerStarted","Data":"f3697dd06292e5b31e4ea052ba02d5a2b8edab4162ea0f406daaca353c70bfb8"} Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.392137 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9f544b6-248c-4f10-8c30-4a976fb6a35c" containerID="448774b9acd3df5be909c3711600acf08f3d5cab454c7b12edb6a130d81caedb" exitCode=0 Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.392221 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerDied","Data":"448774b9acd3df5be909c3711600acf08f3d5cab454c7b12edb6a130d81caedb"} Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.601932 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cqt2r"] Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.604017 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.611660 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.617990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqt2r"] Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.700046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfrg\" (UniqueName: \"kubernetes.io/projected/9a66457d-ec6e-439a-894d-a2ce2519bf0c-kube-api-access-nkfrg\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.700097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-utilities\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.700725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-catalog-content\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.800705 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zk6kb"] Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.802210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nkfrg\" (UniqueName: \"kubernetes.io/projected/9a66457d-ec6e-439a-894d-a2ce2519bf0c-kube-api-access-nkfrg\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.802274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-utilities\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.802372 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-catalog-content\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.803095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-catalog-content\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.803391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a66457d-ec6e-439a-894d-a2ce2519bf0c-utilities\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.804074 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.808097 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.812788 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zk6kb"] Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.825699 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfrg\" (UniqueName: \"kubernetes.io/projected/9a66457d-ec6e-439a-894d-a2ce2519bf0c-kube-api-access-nkfrg\") pod \"redhat-marketplace-cqt2r\" (UID: \"9a66457d-ec6e-439a-894d-a2ce2519bf0c\") " pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.904207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrch\" (UniqueName: \"kubernetes.io/projected/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-kube-api-access-trrch\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.904472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-catalog-content\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.904636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-utilities\") pod \"redhat-operators-zk6kb\" (UID: 
\"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:26 crc kubenswrapper[4832]: I0131 04:49:26.923907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.006266 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-catalog-content\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.006395 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-utilities\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.006504 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrch\" (UniqueName: \"kubernetes.io/projected/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-kube-api-access-trrch\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.007100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-catalog-content\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.007628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-utilities\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.029832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrch\" (UniqueName: \"kubernetes.io/projected/68108ffd-eb09-4ae3-a4d4-0316d20d0feb-kube-api-access-trrch\") pod \"redhat-operators-zk6kb\" (UID: \"68108ffd-eb09-4ae3-a4d4-0316d20d0feb\") " pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.162353 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.402238 4832 generic.go:334] "Generic (PLEG): container finished" podID="271e0384-a8f3-41b0-a543-28210590699c" containerID="f3697dd06292e5b31e4ea052ba02d5a2b8edab4162ea0f406daaca353c70bfb8" exitCode=0 Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.402347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgrt2" event={"ID":"271e0384-a8f3-41b0-a543-28210590699c","Type":"ContainerDied","Data":"f3697dd06292e5b31e4ea052ba02d5a2b8edab4162ea0f406daaca353c70bfb8"} Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.417636 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerStarted","Data":"043b0933459c8b586ca9237a7037587068f0012dc075a7080af302567eab3c2d"} Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.451200 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cqt2r"] Jan 31 04:49:27 crc kubenswrapper[4832]: I0131 04:49:27.578720 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-zk6kb"] Jan 31 04:49:27 crc kubenswrapper[4832]: W0131 04:49:27.581942 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68108ffd_eb09_4ae3_a4d4_0316d20d0feb.slice/crio-4f3fb3d83454a46c63c638a78d391c76e46f16d498612e206b6cac52c49b6462 WatchSource:0}: Error finding container 4f3fb3d83454a46c63c638a78d391c76e46f16d498612e206b6cac52c49b6462: Status 404 returned error can't find the container with id 4f3fb3d83454a46c63c638a78d391c76e46f16d498612e206b6cac52c49b6462 Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.311298 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" podUID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" containerName="registry" containerID="cri-o://4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173" gracePeriod=30 Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.427307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgrt2" event={"ID":"271e0384-a8f3-41b0-a543-28210590699c","Type":"ContainerStarted","Data":"925bf83fa888044f089c88ea5447ad69019f97dee742934fe98b39eaade31f7a"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.431269 4832 generic.go:334] "Generic (PLEG): container finished" podID="f9f544b6-248c-4f10-8c30-4a976fb6a35c" containerID="043b0933459c8b586ca9237a7037587068f0012dc075a7080af302567eab3c2d" exitCode=0 Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.431329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerDied","Data":"043b0933459c8b586ca9237a7037587068f0012dc075a7080af302567eab3c2d"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.434389 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="68108ffd-eb09-4ae3-a4d4-0316d20d0feb" containerID="0c95b212b9f052e9c660c949ffde80a4c0c1803a0e28cf7112ca5fcbe9fb1b32" exitCode=0 Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.434451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk6kb" event={"ID":"68108ffd-eb09-4ae3-a4d4-0316d20d0feb","Type":"ContainerDied","Data":"0c95b212b9f052e9c660c949ffde80a4c0c1803a0e28cf7112ca5fcbe9fb1b32"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.434468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk6kb" event={"ID":"68108ffd-eb09-4ae3-a4d4-0316d20d0feb","Type":"ContainerStarted","Data":"4f3fb3d83454a46c63c638a78d391c76e46f16d498612e206b6cac52c49b6462"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.438070 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a66457d-ec6e-439a-894d-a2ce2519bf0c" containerID="f155659a2816df32f174f7c9808ba9c7341e086b52f3110bc4650356ba3c1ff1" exitCode=0 Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.438125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqt2r" event={"ID":"9a66457d-ec6e-439a-894d-a2ce2519bf0c","Type":"ContainerDied","Data":"f155659a2816df32f174f7c9808ba9c7341e086b52f3110bc4650356ba3c1ff1"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.438175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqt2r" event={"ID":"9a66457d-ec6e-439a-894d-a2ce2519bf0c","Type":"ContainerStarted","Data":"6619842f783728c46a79eb4cc81d60f143b2b40709b4965a348f91f07b79d9f0"} Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.456643 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgrt2" podStartSLOduration=2.020419777 podStartE2EDuration="4.456621782s" podCreationTimestamp="2026-01-31 04:49:24 +0000 UTC" 
firstStartedPulling="2026-01-31 04:49:25.379999221 +0000 UTC m=+374.328820906" lastFinishedPulling="2026-01-31 04:49:27.816201226 +0000 UTC m=+376.765022911" observedRunningTime="2026-01-31 04:49:28.452422502 +0000 UTC m=+377.401244187" watchObservedRunningTime="2026-01-31 04:49:28.456621782 +0000 UTC m=+377.405443477" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.724167 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.831198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.831390 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.831418 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.831439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 
crc kubenswrapper[4832]: I0131 04:49:28.831494 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.831549 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.832196 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.832614 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj2p6\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.832673 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates\") pod \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\" (UID: \"0a2dfeb3-8dde-421d-9e1b-74cb967fb520\") " Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.832931 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.833379 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.836702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6" (OuterVolumeSpecName: "kube-api-access-wj2p6") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "kube-api-access-wj2p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.846285 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.846436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.846828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.854291 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.864456 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0a2dfeb3-8dde-421d-9e1b-74cb967fb520" (UID: "0a2dfeb3-8dde-421d-9e1b-74cb967fb520"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934481 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934522 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934532 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934543 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj2p6\" (UniqueName: \"kubernetes.io/projected/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-kube-api-access-wj2p6\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934552 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:28 crc kubenswrapper[4832]: I0131 04:49:28.934579 4832 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a2dfeb3-8dde-421d-9e1b-74cb967fb520-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.448239 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a66457d-ec6e-439a-894d-a2ce2519bf0c" containerID="a9c6966c8fdc1b372b0d95e0c6c31d6712aceb9230b004fb1f7e2730e9344325" exitCode=0 Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.448351 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqt2r" event={"ID":"9a66457d-ec6e-439a-894d-a2ce2519bf0c","Type":"ContainerDied","Data":"a9c6966c8fdc1b372b0d95e0c6c31d6712aceb9230b004fb1f7e2730e9344325"} Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.450661 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" containerID="4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173" exitCode=0 Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.450702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" event={"ID":"0a2dfeb3-8dde-421d-9e1b-74cb967fb520","Type":"ContainerDied","Data":"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173"} Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.450779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" event={"ID":"0a2dfeb3-8dde-421d-9e1b-74cb967fb520","Type":"ContainerDied","Data":"9eb125cb2c2a3ab664b779673eb2ddd758a258c2a4f0ce9210bee4defa6a7e8f"} Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.450733 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bj2zs" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.450813 4832 scope.go:117] "RemoveContainer" containerID="4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.456791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crxls" event={"ID":"f9f544b6-248c-4f10-8c30-4a976fb6a35c","Type":"ContainerStarted","Data":"ca5e9fc5a32c264dbe00621194d60d6801ed808cbd1754c98409a95bbe35bdff"} Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.460766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk6kb" event={"ID":"68108ffd-eb09-4ae3-a4d4-0316d20d0feb","Type":"ContainerStarted","Data":"58dabb6c519e8001d3f1793cab7003b173cca3e7f37b7a3547ff92efa20c1a66"} Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.467074 4832 scope.go:117] "RemoveContainer" containerID="4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173" Jan 31 04:49:29 crc kubenswrapper[4832]: E0131 04:49:29.469453 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173\": container with ID starting with 4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173 not found: ID does not exist" containerID="4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.469520 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173"} err="failed to get container status \"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173\": rpc error: code = NotFound desc = could not find container 
\"4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173\": container with ID starting with 4d6fa1add6e16daed427610129aa24da23194340fdabefd76a31ba4837c2a173 not found: ID does not exist" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.491243 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crxls" podStartSLOduration=3.007576792 podStartE2EDuration="5.491218615s" podCreationTimestamp="2026-01-31 04:49:24 +0000 UTC" firstStartedPulling="2026-01-31 04:49:26.394486857 +0000 UTC m=+375.343308542" lastFinishedPulling="2026-01-31 04:49:28.87812868 +0000 UTC m=+377.826950365" observedRunningTime="2026-01-31 04:49:29.48784591 +0000 UTC m=+378.436667615" watchObservedRunningTime="2026-01-31 04:49:29.491218615 +0000 UTC m=+378.440040310" Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.530892 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.538929 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bj2zs"] Jan 31 04:49:29 crc kubenswrapper[4832]: I0131 04:49:29.869203 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" path="/var/lib/kubelet/pods/0a2dfeb3-8dde-421d-9e1b-74cb967fb520/volumes" Jan 31 04:49:30 crc kubenswrapper[4832]: I0131 04:49:30.470704 4832 generic.go:334] "Generic (PLEG): container finished" podID="68108ffd-eb09-4ae3-a4d4-0316d20d0feb" containerID="58dabb6c519e8001d3f1793cab7003b173cca3e7f37b7a3547ff92efa20c1a66" exitCode=0 Jan 31 04:49:30 crc kubenswrapper[4832]: I0131 04:49:30.470823 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk6kb" 
event={"ID":"68108ffd-eb09-4ae3-a4d4-0316d20d0feb","Type":"ContainerDied","Data":"58dabb6c519e8001d3f1793cab7003b173cca3e7f37b7a3547ff92efa20c1a66"} Jan 31 04:49:30 crc kubenswrapper[4832]: I0131 04:49:30.475344 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cqt2r" event={"ID":"9a66457d-ec6e-439a-894d-a2ce2519bf0c","Type":"ContainerStarted","Data":"30f5bdcb6f72faa6b390b8dbfa2552bda511b1a0688948926f3512d2d4974014"} Jan 31 04:49:30 crc kubenswrapper[4832]: I0131 04:49:30.524236 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cqt2r" podStartSLOduration=3.097711559 podStartE2EDuration="4.524215347s" podCreationTimestamp="2026-01-31 04:49:26 +0000 UTC" firstStartedPulling="2026-01-31 04:49:28.439371905 +0000 UTC m=+377.388193590" lastFinishedPulling="2026-01-31 04:49:29.865875683 +0000 UTC m=+378.814697378" observedRunningTime="2026-01-31 04:49:30.521939877 +0000 UTC m=+379.470761582" watchObservedRunningTime="2026-01-31 04:49:30.524215347 +0000 UTC m=+379.473037032" Jan 31 04:49:31 crc kubenswrapper[4832]: I0131 04:49:31.498016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk6kb" event={"ID":"68108ffd-eb09-4ae3-a4d4-0316d20d0feb","Type":"ContainerStarted","Data":"b89df03add70d5aef0b60a05612363e775a7ceabd57bbdaae149fbdb3bedc939"} Jan 31 04:49:31 crc kubenswrapper[4832]: I0131 04:49:31.521282 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zk6kb" podStartSLOduration=2.93110939 podStartE2EDuration="5.52125503s" podCreationTimestamp="2026-01-31 04:49:26 +0000 UTC" firstStartedPulling="2026-01-31 04:49:28.435804854 +0000 UTC m=+377.384626539" lastFinishedPulling="2026-01-31 04:49:31.025950494 +0000 UTC m=+379.974772179" observedRunningTime="2026-01-31 04:49:31.516165232 +0000 UTC m=+380.464986927" 
watchObservedRunningTime="2026-01-31 04:49:31.52125503 +0000 UTC m=+380.470076715" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.549374 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.551325 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.605984 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.751873 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.751935 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:34 crc kubenswrapper[4832]: I0131 04:49:34.786300 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:35 crc kubenswrapper[4832]: I0131 04:49:35.564992 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crxls" Jan 31 04:49:35 crc kubenswrapper[4832]: I0131 04:49:35.567826 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgrt2" Jan 31 04:49:36 crc kubenswrapper[4832]: I0131 04:49:36.924966 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:36 crc kubenswrapper[4832]: I0131 04:49:36.925666 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:36 
crc kubenswrapper[4832]: I0131 04:49:36.980216 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:37 crc kubenswrapper[4832]: I0131 04:49:37.163882 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:37 crc kubenswrapper[4832]: I0131 04:49:37.164800 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:37 crc kubenswrapper[4832]: I0131 04:49:37.575128 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cqt2r" Jan 31 04:49:38 crc kubenswrapper[4832]: I0131 04:49:38.213132 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zk6kb" podUID="68108ffd-eb09-4ae3-a4d4-0316d20d0feb" containerName="registry-server" probeResult="failure" output=< Jan 31 04:49:38 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 04:49:38 crc kubenswrapper[4832]: > Jan 31 04:49:47 crc kubenswrapper[4832]: I0131 04:49:47.201103 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:47 crc kubenswrapper[4832]: I0131 04:49:47.255400 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zk6kb" Jan 31 04:49:48 crc kubenswrapper[4832]: I0131 04:49:48.540997 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:49:48 crc kubenswrapper[4832]: I0131 04:49:48.541395 4832 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:49:48 crc kubenswrapper[4832]: I0131 04:49:48.541477 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:49:48 crc kubenswrapper[4832]: I0131 04:49:48.542643 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:49:48 crc kubenswrapper[4832]: I0131 04:49:48.542779 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9" gracePeriod=600 Jan 31 04:49:49 crc kubenswrapper[4832]: I0131 04:49:49.616930 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9" exitCode=0 Jan 31 04:49:49 crc kubenswrapper[4832]: I0131 04:49:49.616999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9"} Jan 31 04:49:49 crc kubenswrapper[4832]: I0131 04:49:49.617044 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1"} Jan 31 04:49:49 crc kubenswrapper[4832]: I0131 04:49:49.617071 4832 scope.go:117] "RemoveContainer" containerID="26818006e28b3733d6f64299da970a0ccf772a1560ef75e0217029cd7c2b7720" Jan 31 04:51:48 crc kubenswrapper[4832]: I0131 04:51:48.540022 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:51:48 crc kubenswrapper[4832]: I0131 04:51:48.542752 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:52:18 crc kubenswrapper[4832]: I0131 04:52:18.540917 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:52:18 crc kubenswrapper[4832]: I0131 04:52:18.542036 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:52:48 crc kubenswrapper[4832]: I0131 
04:52:48.540783 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:52:48 crc kubenswrapper[4832]: I0131 04:52:48.541390 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:52:48 crc kubenswrapper[4832]: I0131 04:52:48.541454 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:52:48 crc kubenswrapper[4832]: I0131 04:52:48.542258 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:52:48 crc kubenswrapper[4832]: I0131 04:52:48.542330 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1" gracePeriod=600 Jan 31 04:52:49 crc kubenswrapper[4832]: I0131 04:52:49.124381 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1" exitCode=0 Jan 31 
04:52:49 crc kubenswrapper[4832]: I0131 04:52:49.124434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1"} Jan 31 04:52:49 crc kubenswrapper[4832]: I0131 04:52:49.125164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b"} Jan 31 04:52:49 crc kubenswrapper[4832]: I0131 04:52:49.125190 4832 scope.go:117] "RemoveContainer" containerID="cb994e6af299c20060c7a17af978747c41f96b8a52a6c65cc669ce278ad24cd9" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.610238 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-465dm"] Jan 31 04:54:19 crc kubenswrapper[4832]: E0131 04:54:19.611829 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" containerName="registry" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.611852 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" containerName="registry" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.612009 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2dfeb3-8dde-421d-9e1b-74cb967fb520" containerName="registry" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.612720 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.615035 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.616578 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7wtqx" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.616796 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.621457 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xqw4t"] Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.622549 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xqw4t" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.630896 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-94xr7" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.640590 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-465dm"] Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.644945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xqw4t"] Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.657746 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-26nzk"] Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.658664 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.668171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7w2st" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.680026 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-26nzk"] Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.782168 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6zg\" (UniqueName: \"kubernetes.io/projected/0ebb0bad-994a-4c2a-b9d2-21f38ee3939a-kube-api-access-9n6zg\") pod \"cert-manager-webhook-687f57d79b-26nzk\" (UID: \"0ebb0bad-994a-4c2a-b9d2-21f38ee3939a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.782436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4zb\" (UniqueName: \"kubernetes.io/projected/c0e95955-9451-49d7-89f9-daff9bd04f21-kube-api-access-km4zb\") pod \"cert-manager-858654f9db-xqw4t\" (UID: \"c0e95955-9451-49d7-89f9-daff9bd04f21\") " pod="cert-manager/cert-manager-858654f9db-xqw4t" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.782513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rsd\" (UniqueName: \"kubernetes.io/projected/82dfe439-0519-43f4-867f-b68944898393-kube-api-access-r5rsd\") pod \"cert-manager-cainjector-cf98fcc89-465dm\" (UID: \"82dfe439-0519-43f4-867f-b68944898393\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.883335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6zg\" (UniqueName: 
\"kubernetes.io/projected/0ebb0bad-994a-4c2a-b9d2-21f38ee3939a-kube-api-access-9n6zg\") pod \"cert-manager-webhook-687f57d79b-26nzk\" (UID: \"0ebb0bad-994a-4c2a-b9d2-21f38ee3939a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.883406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km4zb\" (UniqueName: \"kubernetes.io/projected/c0e95955-9451-49d7-89f9-daff9bd04f21-kube-api-access-km4zb\") pod \"cert-manager-858654f9db-xqw4t\" (UID: \"c0e95955-9451-49d7-89f9-daff9bd04f21\") " pod="cert-manager/cert-manager-858654f9db-xqw4t" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.883445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rsd\" (UniqueName: \"kubernetes.io/projected/82dfe439-0519-43f4-867f-b68944898393-kube-api-access-r5rsd\") pod \"cert-manager-cainjector-cf98fcc89-465dm\" (UID: \"82dfe439-0519-43f4-867f-b68944898393\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.904644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rsd\" (UniqueName: \"kubernetes.io/projected/82dfe439-0519-43f4-867f-b68944898393-kube-api-access-r5rsd\") pod \"cert-manager-cainjector-cf98fcc89-465dm\" (UID: \"82dfe439-0519-43f4-867f-b68944898393\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.906805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6zg\" (UniqueName: \"kubernetes.io/projected/0ebb0bad-994a-4c2a-b9d2-21f38ee3939a-kube-api-access-9n6zg\") pod \"cert-manager-webhook-687f57d79b-26nzk\" (UID: \"0ebb0bad-994a-4c2a-b9d2-21f38ee3939a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.906868 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km4zb\" (UniqueName: \"kubernetes.io/projected/c0e95955-9451-49d7-89f9-daff9bd04f21-kube-api-access-km4zb\") pod \"cert-manager-858654f9db-xqw4t\" (UID: \"c0e95955-9451-49d7-89f9-daff9bd04f21\") " pod="cert-manager/cert-manager-858654f9db-xqw4t" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.943890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.952412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xqw4t" Jan 31 04:54:19 crc kubenswrapper[4832]: I0131 04:54:19.982123 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.201017 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-465dm"] Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.217239 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.443662 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-26nzk"] Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.453107 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xqw4t"] Jan 31 04:54:20 crc kubenswrapper[4832]: W0131 04:54:20.460019 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ebb0bad_994a_4c2a_b9d2_21f38ee3939a.slice/crio-a628dbf87091a419ac853fed825425c11fa16244a119defb30fd4afad8e845bd WatchSource:0}: Error finding container 
a628dbf87091a419ac853fed825425c11fa16244a119defb30fd4afad8e845bd: Status 404 returned error can't find the container with id a628dbf87091a419ac853fed825425c11fa16244a119defb30fd4afad8e845bd Jan 31 04:54:20 crc kubenswrapper[4832]: W0131 04:54:20.464870 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e95955_9451_49d7_89f9_daff9bd04f21.slice/crio-19a99593df6f0e2ade4eb5c552efc369f69e47a8dbde003f88a36d55ad6b694c WatchSource:0}: Error finding container 19a99593df6f0e2ade4eb5c552efc369f69e47a8dbde003f88a36d55ad6b694c: Status 404 returned error can't find the container with id 19a99593df6f0e2ade4eb5c552efc369f69e47a8dbde003f88a36d55ad6b694c Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.774542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" event={"ID":"0ebb0bad-994a-4c2a-b9d2-21f38ee3939a","Type":"ContainerStarted","Data":"a628dbf87091a419ac853fed825425c11fa16244a119defb30fd4afad8e845bd"} Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.776057 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xqw4t" event={"ID":"c0e95955-9451-49d7-89f9-daff9bd04f21","Type":"ContainerStarted","Data":"19a99593df6f0e2ade4eb5c552efc369f69e47a8dbde003f88a36d55ad6b694c"} Jan 31 04:54:20 crc kubenswrapper[4832]: I0131 04:54:20.777896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" event={"ID":"82dfe439-0519-43f4-867f-b68944898393","Type":"ContainerStarted","Data":"74976ce666fffd7408d3101663de91725dd597feb75031222b8ced9f9c2d26db"} Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.799976 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" 
event={"ID":"0ebb0bad-994a-4c2a-b9d2-21f38ee3939a","Type":"ContainerStarted","Data":"2abc12763349d5668147c80919cad1111d826d9072fe223f0ca0dc1d2456fffc"} Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.800827 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.802699 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xqw4t" event={"ID":"c0e95955-9451-49d7-89f9-daff9bd04f21","Type":"ContainerStarted","Data":"1199b42363cf02fd8473d8378cedc4f414fd6cd275392b03cf744efd5d444d4e"} Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.806774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" event={"ID":"82dfe439-0519-43f4-867f-b68944898393","Type":"ContainerStarted","Data":"f3d661c44ba0cceb24d028a225c105788fccc3f5eace9a83ffca7067084ef362"} Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.823552 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" podStartSLOduration=2.348023559 podStartE2EDuration="5.823535786s" podCreationTimestamp="2026-01-31 04:54:19 +0000 UTC" firstStartedPulling="2026-01-31 04:54:20.464761519 +0000 UTC m=+669.413583224" lastFinishedPulling="2026-01-31 04:54:23.940273766 +0000 UTC m=+672.889095451" observedRunningTime="2026-01-31 04:54:24.820964326 +0000 UTC m=+673.769786011" watchObservedRunningTime="2026-01-31 04:54:24.823535786 +0000 UTC m=+673.772357471" Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.840229 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-465dm" podStartSLOduration=2.122329949 podStartE2EDuration="5.840207713s" podCreationTimestamp="2026-01-31 04:54:19 +0000 UTC" firstStartedPulling="2026-01-31 04:54:20.216973474 +0000 UTC 
m=+669.165795159" lastFinishedPulling="2026-01-31 04:54:23.934851228 +0000 UTC m=+672.883672923" observedRunningTime="2026-01-31 04:54:24.83721883 +0000 UTC m=+673.786040545" watchObservedRunningTime="2026-01-31 04:54:24.840207713 +0000 UTC m=+673.789029398" Jan 31 04:54:24 crc kubenswrapper[4832]: I0131 04:54:24.854410 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xqw4t" podStartSLOduration=2.374420467 podStartE2EDuration="5.854386214s" podCreationTimestamp="2026-01-31 04:54:19 +0000 UTC" firstStartedPulling="2026-01-31 04:54:20.469900468 +0000 UTC m=+669.418722153" lastFinishedPulling="2026-01-31 04:54:23.949866215 +0000 UTC m=+672.898687900" observedRunningTime="2026-01-31 04:54:24.853054062 +0000 UTC m=+673.801875747" watchObservedRunningTime="2026-01-31 04:54:24.854386214 +0000 UTC m=+673.803207909" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.641332 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7gvmz"] Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642146 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-controller" containerID="cri-o://504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642192 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="nbdb" containerID="cri-o://e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642303 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" 
podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="northd" containerID="cri-o://4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642362 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642413 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-node" containerID="cri-o://ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642461 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-acl-logging" containerID="cri-o://05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.642707 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="sbdb" containerID="cri-o://18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.683489 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" 
containerID="cri-o://8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" gracePeriod=30 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.842291 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/2.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.842807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/1.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.842852 4832 generic.go:334] "Generic (PLEG): container finished" podID="df4dafae-fa72-4f03-8531-93538336b0cd" containerID="43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f" exitCode=2 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.842915 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerDied","Data":"43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.842958 4832 scope.go:117] "RemoveContainer" containerID="2280a590a254679f2240bf2fb2aa06633b56575b436f18ea8be3bd8912598faf" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.843413 4832 scope.go:117] "RemoveContainer" containerID="43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f" Jan 31 04:54:29 crc kubenswrapper[4832]: E0131 04:54:29.843626 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-frk6z_openshift-multus(df4dafae-fa72-4f03-8531-93538336b0cd)\"" pod="openshift-multus/multus-frk6z" podUID="df4dafae-fa72-4f03-8531-93538336b0cd" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.845591 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovnkube-controller/3.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.847582 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-acl-logging/0.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-controller/0.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848799 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" exitCode=0 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848827 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" exitCode=0 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848837 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" exitCode=0 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848853 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" exitCode=143 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848875 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" exitCode=143 Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848952 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.848964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a"} Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.955122 4832 scope.go:117] "RemoveContainer" containerID="789ec5867f23a3ab961a056dfcb9b8de2be9a4f2e27c2906ac20d732f524b296" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.981762 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-acl-logging/0.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.982212 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-controller/0.log" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.982785 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:54:29 crc kubenswrapper[4832]: I0131 04:54:29.985359 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-26nzk" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041382 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rhfkf"] Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041699 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041721 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041734 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="northd" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041741 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="northd" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041752 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041759 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041766 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-node" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041773 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-node" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041785 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="nbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041792 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="nbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041801 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="sbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041807 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="sbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041817 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041824 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041837 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kubecfg-setup" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041845 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kubecfg-setup" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041855 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041861 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041868 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041874 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.041883 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-acl-logging" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041889 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-acl-logging" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041991 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.041999 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042010 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="sbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042018 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042025 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042033 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovn-acl-logging" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042040 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="kube-rbac-proxy-node" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042047 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="northd" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042055 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042062 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="nbdb" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.042157 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042164 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.042175 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042183 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042276 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.042469 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" containerName="ovnkube-controller" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.044186 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046338 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046411 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 
04:54:30.046487 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb97j\" (UniqueName: \"kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046549 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046624 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046644 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046685 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046743 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046783 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047017 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047043 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash\") pod \"e089fa33-e032-4755-8b7e-262adfecc82f\" (UID: \"e089fa33-e032-4755-8b7e-262adfecc82f\") " Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046879 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.046904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047759 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash" (OuterVolumeSpecName: "host-slash") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047824 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket" (OuterVolumeSpecName: "log-socket") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047840 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log" (OuterVolumeSpecName: "node-log") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047861 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.047953 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048679 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048699 4832 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048708 4832 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048716 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048724 4832 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048733 4832 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048743 4832 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048754 
4832 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048763 4832 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048772 4832 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048784 4832 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048795 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048804 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048813 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e089fa33-e032-4755-8b7e-262adfecc82f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048822 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048830 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.048838 4832 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.055985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.057905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j" (OuterVolumeSpecName: "kube-api-access-sb97j") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "kube-api-access-sb97j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.069243 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e089fa33-e032-4755-8b7e-262adfecc82f" (UID: "e089fa33-e032-4755-8b7e-262adfecc82f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.149896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-script-lib\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.149971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-slash\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-systemd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-env-overrides\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-systemd-units\") pod \"ovnkube-node-rhfkf\" (UID: 
\"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-bin\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-config\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-var-lib-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-netns\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovn-node-metrics-cert\") 
pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-node-log\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-netd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150398 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-ovn\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzl6m\" (UniqueName: \"kubernetes.io/projected/64408d54-5303-4c1a-941c-d5a2dc16fac5-kube-api-access-lzl6m\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-etc-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-kubelet\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150708 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-log-socket\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150774 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb97j\" (UniqueName: \"kubernetes.io/projected/e089fa33-e032-4755-8b7e-262adfecc82f-kube-api-access-sb97j\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150815 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e089fa33-e032-4755-8b7e-262adfecc82f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.150833 4832 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e089fa33-e032-4755-8b7e-262adfecc82f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-systemd-units\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-bin\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252688 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-config\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-var-lib-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-netns\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252795 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovn-node-metrics-cert\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-node-log\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-systemd-units\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-netd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-var-lib-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-netd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-netns\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.252836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-cni-bin\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253136 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-ovn\") pod 
\"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-node-log\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-ovn\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzl6m\" (UniqueName: \"kubernetes.io/projected/64408d54-5303-4c1a-941c-d5a2dc16fac5-kube-api-access-lzl6m\") pod 
\"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-etc-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-kubelet\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253524 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-log-socket\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253697 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-script-lib\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-env-overrides\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-slash\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-systemd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.253987 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-log-socket\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.254031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-run-systemd\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.254388 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-config\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.254611 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-etc-openvswitch\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.254673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-kubelet\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.254725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64408d54-5303-4c1a-941c-d5a2dc16fac5-host-slash\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.255302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-env-overrides\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.255335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovnkube-script-lib\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.259163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/64408d54-5303-4c1a-941c-d5a2dc16fac5-ovn-node-metrics-cert\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.284928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzl6m\" (UniqueName: \"kubernetes.io/projected/64408d54-5303-4c1a-941c-d5a2dc16fac5-kube-api-access-lzl6m\") pod \"ovnkube-node-rhfkf\" (UID: \"64408d54-5303-4c1a-941c-d5a2dc16fac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.376407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:30 crc kubenswrapper[4832]: E0131 04:54:30.720696 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64408d54_5303_4c1a_941c_d5a2dc16fac5.slice/crio-conmon-823f9367d5655ee3d59d53cfe7b6e9baa869981aa4d11e0a1c68d3dc544c0c38.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64408d54_5303_4c1a_941c_d5a2dc16fac5.slice/crio-823f9367d5655ee3d59d53cfe7b6e9baa869981aa4d11e0a1c68d3dc544c0c38.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.857756 4832 generic.go:334] "Generic (PLEG): container finished" podID="64408d54-5303-4c1a-941c-d5a2dc16fac5" containerID="823f9367d5655ee3d59d53cfe7b6e9baa869981aa4d11e0a1c68d3dc544c0c38" exitCode=0 Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.857831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" 
event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerDied","Data":"823f9367d5655ee3d59d53cfe7b6e9baa869981aa4d11e0a1c68d3dc544c0c38"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.858288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"57ef1e252518023913ca19ae45fe1a2b310faf6e4b8fd69e61a2ba1169393521"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.861303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/2.log" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.866985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-acl-logging/0.log" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.867551 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7gvmz_e089fa33-e032-4755-8b7e-262adfecc82f/ovn-controller/0.log" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.867920 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" exitCode=0 Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.867960 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" exitCode=0 Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.867969 4832 generic.go:334] "Generic (PLEG): container finished" podID="e089fa33-e032-4755-8b7e-262adfecc82f" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" exitCode=0 Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868005 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868055 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" event={"ID":"e089fa33-e032-4755-8b7e-262adfecc82f","Type":"ContainerDied","Data":"bd634a7c0e4d4e8df72dd5cbea912c13895b43a946a14c4cf92c1eada836ab06"} Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868077 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7gvmz" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.868089 4832 scope.go:117] "RemoveContainer" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.894636 4832 scope.go:117] "RemoveContainer" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.911434 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7gvmz"] Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.915735 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7gvmz"] Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.928133 4832 scope.go:117] "RemoveContainer" containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.948752 4832 scope.go:117] "RemoveContainer" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.978070 4832 scope.go:117] "RemoveContainer" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" Jan 31 04:54:30 crc kubenswrapper[4832]: I0131 04:54:30.992817 4832 scope.go:117] "RemoveContainer" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.011318 4832 scope.go:117] "RemoveContainer" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.044154 4832 scope.go:117] "RemoveContainer" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.073425 4832 scope.go:117] "RemoveContainer" 
containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.090186 4832 scope.go:117] "RemoveContainer" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.091029 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": container with ID starting with 8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6 not found: ID does not exist" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.091064 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6"} err="failed to get container status \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": rpc error: code = NotFound desc = could not find container \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": container with ID starting with 8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.091086 4832 scope.go:117] "RemoveContainer" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.091407 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": container with ID starting with 18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4 not found: ID does not exist" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" Jan 31 04:54:31 crc 
kubenswrapper[4832]: I0131 04:54:31.091428 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4"} err="failed to get container status \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": rpc error: code = NotFound desc = could not find container \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": container with ID starting with 18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.091441 4832 scope.go:117] "RemoveContainer" containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.091663 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": container with ID starting with e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a not found: ID does not exist" containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.091685 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a"} err="failed to get container status \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": rpc error: code = NotFound desc = could not find container \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": container with ID starting with e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.091699 4832 scope.go:117] "RemoveContainer" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" Jan 31 
04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.092015 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": container with ID starting with 4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287 not found: ID does not exist" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092043 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287"} err="failed to get container status \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": rpc error: code = NotFound desc = could not find container \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": container with ID starting with 4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092062 4832 scope.go:117] "RemoveContainer" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.092314 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": container with ID starting with 250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114 not found: ID does not exist" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092338 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114"} err="failed to get container status 
\"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": rpc error: code = NotFound desc = could not find container \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": container with ID starting with 250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092359 4832 scope.go:117] "RemoveContainer" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.092613 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": container with ID starting with ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2 not found: ID does not exist" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092636 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2"} err="failed to get container status \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": rpc error: code = NotFound desc = could not find container \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": container with ID starting with ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092650 4832 scope.go:117] "RemoveContainer" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.092834 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": container with ID starting with 05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5 not found: ID does not exist" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092864 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5"} err="failed to get container status \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": rpc error: code = NotFound desc = could not find container \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": container with ID starting with 05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.092882 4832 scope.go:117] "RemoveContainer" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.093124 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": container with ID starting with 504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a not found: ID does not exist" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093152 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a"} err="failed to get container status \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": rpc error: code = NotFound desc = could not find container \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": container with ID 
starting with 504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093171 4832 scope.go:117] "RemoveContainer" containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" Jan 31 04:54:31 crc kubenswrapper[4832]: E0131 04:54:31.093374 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": container with ID starting with e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765 not found: ID does not exist" containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093398 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765"} err="failed to get container status \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": rpc error: code = NotFound desc = could not find container \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": container with ID starting with e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093413 4832 scope.go:117] "RemoveContainer" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093613 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6"} err="failed to get container status \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": rpc error: code = NotFound desc = could not find container \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": 
container with ID starting with 8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093636 4832 scope.go:117] "RemoveContainer" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093914 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4"} err="failed to get container status \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": rpc error: code = NotFound desc = could not find container \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": container with ID starting with 18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.093936 4832 scope.go:117] "RemoveContainer" containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094108 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a"} err="failed to get container status \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": rpc error: code = NotFound desc = could not find container \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": container with ID starting with e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094134 4832 scope.go:117] "RemoveContainer" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094333 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287"} err="failed to get container status \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": rpc error: code = NotFound desc = could not find container \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": container with ID starting with 4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094352 4832 scope.go:117] "RemoveContainer" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094596 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114"} err="failed to get container status \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": rpc error: code = NotFound desc = could not find container \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": container with ID starting with 250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094614 4832 scope.go:117] "RemoveContainer" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094790 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2"} err="failed to get container status \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": rpc error: code = NotFound desc = could not find container \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": container with ID starting with ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2 not found: ID does not 
exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094818 4832 scope.go:117] "RemoveContainer" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.094998 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5"} err="failed to get container status \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": rpc error: code = NotFound desc = could not find container \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": container with ID starting with 05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095017 4832 scope.go:117] "RemoveContainer" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095189 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a"} err="failed to get container status \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": rpc error: code = NotFound desc = could not find container \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": container with ID starting with 504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095207 4832 scope.go:117] "RemoveContainer" containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095396 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765"} err="failed to get container status 
\"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": rpc error: code = NotFound desc = could not find container \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": container with ID starting with e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095413 4832 scope.go:117] "RemoveContainer" containerID="8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095609 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6"} err="failed to get container status \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": rpc error: code = NotFound desc = could not find container \"8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6\": container with ID starting with 8d8961332be3b6506c0b2ead973075f8d5f481ddac648418b42064823389e7c6 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095627 4832 scope.go:117] "RemoveContainer" containerID="18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095939 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4"} err="failed to get container status \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": rpc error: code = NotFound desc = could not find container \"18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4\": container with ID starting with 18908adab758ed98ba9c6064ae308dbd91658ead11925b817411fe42da1a35a4 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.095958 4832 scope.go:117] "RemoveContainer" 
containerID="e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096209 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a"} err="failed to get container status \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": rpc error: code = NotFound desc = could not find container \"e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a\": container with ID starting with e931cb74abd917467e08b786f1a7d196721ada5dd99d5393f63816aa9ac9bf7a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096227 4832 scope.go:117] "RemoveContainer" containerID="4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096395 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287"} err="failed to get container status \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": rpc error: code = NotFound desc = could not find container \"4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287\": container with ID starting with 4a2286219e83884c2ae37b193faa7d3fa8617b6efc7d5ded79b51ebe9a1e6287 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096412 4832 scope.go:117] "RemoveContainer" containerID="250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096578 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114"} err="failed to get container status \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": rpc error: code = NotFound desc = could 
not find container \"250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114\": container with ID starting with 250882d8519817a4f79d99963ee3c269a0287e7a8a633ca111e4d274ba9ba114 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096595 4832 scope.go:117] "RemoveContainer" containerID="ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096831 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2"} err="failed to get container status \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": rpc error: code = NotFound desc = could not find container \"ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2\": container with ID starting with ed3df04e5fc55e09362fe40191f93b00d06f379bca6fb0dc2f856e5b35df6cf2 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.096851 4832 scope.go:117] "RemoveContainer" containerID="05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.097080 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5"} err="failed to get container status \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": rpc error: code = NotFound desc = could not find container \"05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5\": container with ID starting with 05ffd6ef1b3932dc557549cf3dd91dffc5ba0577950e82b724b0954750c01ce5 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.097096 4832 scope.go:117] "RemoveContainer" containerID="504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 
04:54:31.097714 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a"} err="failed to get container status \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": rpc error: code = NotFound desc = could not find container \"504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a\": container with ID starting with 504a0a6628de33e5f4b3b512bd48034c646884c501741cfff2f99fc3f3788d2a not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.097736 4832 scope.go:117] "RemoveContainer" containerID="e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.100580 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765"} err="failed to get container status \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": rpc error: code = NotFound desc = could not find container \"e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765\": container with ID starting with e9833342a69c44a3e963b1cdc654bb27bdcc75794c67359b53576195bfd50765 not found: ID does not exist" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.898417 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e089fa33-e032-4755-8b7e-262adfecc82f" path="/var/lib/kubelet/pods/e089fa33-e032-4755-8b7e-262adfecc82f/volumes" Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"26b54985dc0eecfb994ef5b12cda81ecf61284311a6ad3f67a551ef7db0c3c8c"} Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901775 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"3424110d96d71ef30f5c9e56ea27742b320bb898b5294a4bbec0b34328be89ca"} Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"2be13be6811d1692d0a386d321bc3be867f848958e4d883a574685d5ae84950d"} Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901799 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"5a6462b480f928394567de2ca8b4c655f8be257dd9fd92e82259ca12378a08ce"} Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"29a62363083b835b75401cdd2a0ed4d2eaa004a63a31ef4bbbbd5025a15b8fd9"} Jan 31 04:54:31 crc kubenswrapper[4832]: I0131 04:54:31.901820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"fb23685ac7473217c76ac57faae86707dc4793973c865ebf91a539419b6e0547"} Jan 31 04:54:34 crc kubenswrapper[4832]: I0131 04:54:34.926825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"47d75b843e01dad357762a7a4ac7f85c3ca4cc738577faf35f871cb93644a787"} Jan 31 04:54:36 crc kubenswrapper[4832]: I0131 04:54:36.944948 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" 
event={"ID":"64408d54-5303-4c1a-941c-d5a2dc16fac5","Type":"ContainerStarted","Data":"c83d9830f0c28e4dce5540b3f07ebf5d569b11b7183de85c0f829c3cd71d2966"} Jan 31 04:54:36 crc kubenswrapper[4832]: I0131 04:54:36.945770 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:36 crc kubenswrapper[4832]: I0131 04:54:36.945789 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:36 crc kubenswrapper[4832]: I0131 04:54:36.983138 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:36 crc kubenswrapper[4832]: I0131 04:54:36.991795 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" podStartSLOduration=6.991762696 podStartE2EDuration="6.991762696s" podCreationTimestamp="2026-01-31 04:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:54:36.985496022 +0000 UTC m=+685.934317707" watchObservedRunningTime="2026-01-31 04:54:36.991762696 +0000 UTC m=+685.940584381" Jan 31 04:54:37 crc kubenswrapper[4832]: I0131 04:54:37.954058 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:37 crc kubenswrapper[4832]: I0131 04:54:37.992620 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:54:44 crc kubenswrapper[4832]: I0131 04:54:44.859911 4832 scope.go:117] "RemoveContainer" containerID="43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f" Jan 31 04:54:44 crc kubenswrapper[4832]: E0131 04:54:44.861034 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-frk6z_openshift-multus(df4dafae-fa72-4f03-8531-93538336b0cd)\"" pod="openshift-multus/multus-frk6z" podUID="df4dafae-fa72-4f03-8531-93538336b0cd" Jan 31 04:54:48 crc kubenswrapper[4832]: I0131 04:54:48.540516 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:54:48 crc kubenswrapper[4832]: I0131 04:54:48.541165 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:54:56 crc kubenswrapper[4832]: I0131 04:54:56.859699 4832 scope.go:117] "RemoveContainer" containerID="43aa9277c62f2623e6a18c4d5b2b0b72592088c9d1621dc5dff55fc1304d725f" Jan 31 04:54:57 crc kubenswrapper[4832]: I0131 04:54:57.092797 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-frk6z_df4dafae-fa72-4f03-8531-93538336b0cd/kube-multus/2.log" Jan 31 04:54:57 crc kubenswrapper[4832]: I0131 04:54:57.093137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-frk6z" event={"ID":"df4dafae-fa72-4f03-8531-93538336b0cd","Type":"ContainerStarted","Data":"e2b0573fbee83d1a9532ceb4aab716b56c6c2c98f600dbb2c3ef895d624467b6"} Jan 31 04:55:00 crc kubenswrapper[4832]: I0131 04:55:00.439937 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rhfkf" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.363266 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p"] Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.370829 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.378150 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.408594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p"] Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.519474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.519600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.519671 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7ckr\" (UniqueName: \"kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.620857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.620961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.621002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7ckr\" (UniqueName: \"kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.621621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.621613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.652185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7ckr\" (UniqueName: \"kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:06 crc kubenswrapper[4832]: I0131 04:55:06.696002 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:07 crc kubenswrapper[4832]: I0131 04:55:07.203737 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p"] Jan 31 04:55:08 crc kubenswrapper[4832]: I0131 04:55:08.169102 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerID="063593f365db3e0a7e4ed81dfa29827fdcdd233a5d381f5a32a9e13b53c4caf0" exitCode=0 Jan 31 04:55:08 crc kubenswrapper[4832]: I0131 04:55:08.169166 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" event={"ID":"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7","Type":"ContainerDied","Data":"063593f365db3e0a7e4ed81dfa29827fdcdd233a5d381f5a32a9e13b53c4caf0"} Jan 31 04:55:08 crc kubenswrapper[4832]: I0131 04:55:08.169213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" event={"ID":"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7","Type":"ContainerStarted","Data":"dda3f8a0dabd802376a780b1fe4b7efd3f97407c82439629e4d27e36aacb0869"} Jan 31 04:55:10 crc kubenswrapper[4832]: I0131 04:55:10.185998 4832 generic.go:334] "Generic (PLEG): container finished" podID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerID="1aad02a4e92002000a5c7798c3811f3d5414501d5c1b51afed5dabc6bb57de7e" exitCode=0 Jan 31 04:55:10 crc kubenswrapper[4832]: I0131 04:55:10.186056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" event={"ID":"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7","Type":"ContainerDied","Data":"1aad02a4e92002000a5c7798c3811f3d5414501d5c1b51afed5dabc6bb57de7e"} Jan 31 04:55:11 crc kubenswrapper[4832]: I0131 04:55:11.204512 4832 
generic.go:334] "Generic (PLEG): container finished" podID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerID="07064d4b125fa1ad767f47624cc140276b13c6ea1d3e6265a9fcf4ab515099a2" exitCode=0 Jan 31 04:55:11 crc kubenswrapper[4832]: I0131 04:55:11.205222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" event={"ID":"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7","Type":"ContainerDied","Data":"07064d4b125fa1ad767f47624cc140276b13c6ea1d3e6265a9fcf4ab515099a2"} Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.527889 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.622092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle\") pod \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.622176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7ckr\" (UniqueName: \"kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr\") pod \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.622209 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util\") pod \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\" (UID: \"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7\") " Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.623329 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle" (OuterVolumeSpecName: "bundle") pod "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" (UID: "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.629832 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr" (OuterVolumeSpecName: "kube-api-access-q7ckr") pod "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" (UID: "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7"). InnerVolumeSpecName "kube-api-access-q7ckr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.641101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util" (OuterVolumeSpecName: "util") pod "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" (UID: "8c73b6e9-4228-4e24-bdd7-18f6980c3bc7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.724181 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.724245 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7ckr\" (UniqueName: \"kubernetes.io/projected/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-kube-api-access-q7ckr\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:12 crc kubenswrapper[4832]: I0131 04:55:12.724261 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c73b6e9-4228-4e24-bdd7-18f6980c3bc7-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:55:13 crc kubenswrapper[4832]: I0131 04:55:13.227210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" event={"ID":"8c73b6e9-4228-4e24-bdd7-18f6980c3bc7","Type":"ContainerDied","Data":"dda3f8a0dabd802376a780b1fe4b7efd3f97407c82439629e4d27e36aacb0869"} Jan 31 04:55:13 crc kubenswrapper[4832]: I0131 04:55:13.227295 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda3f8a0dabd802376a780b1fe4b7efd3f97407c82439629e4d27e36aacb0869" Jan 31 04:55:13 crc kubenswrapper[4832]: I0131 04:55:13.227317 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.956445 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wgqxf"] Jan 31 04:55:17 crc kubenswrapper[4832]: E0131 04:55:17.956770 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="util" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.956789 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="util" Jan 31 04:55:17 crc kubenswrapper[4832]: E0131 04:55:17.956805 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="extract" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.956811 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="extract" Jan 31 04:55:17 crc kubenswrapper[4832]: E0131 04:55:17.956828 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="pull" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.956835 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="pull" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.956940 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c73b6e9-4228-4e24-bdd7-18f6980c3bc7" containerName="extract" Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.957338 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf"
Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.958963 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.959128 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.959270 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qz8lw"
Jan 31 04:55:17 crc kubenswrapper[4832]: I0131 04:55:17.974583 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wgqxf"]
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.101908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sssd\" (UniqueName: \"kubernetes.io/projected/a73e7d2a-36f0-49e9-82ab-11ede6b1761b-kube-api-access-2sssd\") pod \"nmstate-operator-646758c888-wgqxf\" (UID: \"a73e7d2a-36f0-49e9-82ab-11ede6b1761b\") " pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf"
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.203114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sssd\" (UniqueName: \"kubernetes.io/projected/a73e7d2a-36f0-49e9-82ab-11ede6b1761b-kube-api-access-2sssd\") pod \"nmstate-operator-646758c888-wgqxf\" (UID: \"a73e7d2a-36f0-49e9-82ab-11ede6b1761b\") " pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf"
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.226791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sssd\" (UniqueName: \"kubernetes.io/projected/a73e7d2a-36f0-49e9-82ab-11ede6b1761b-kube-api-access-2sssd\") pod \"nmstate-operator-646758c888-wgqxf\" (UID: \"a73e7d2a-36f0-49e9-82ab-11ede6b1761b\") " pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf"
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.294313 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf"
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.501856 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wgqxf"]
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.542068 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:55:18 crc kubenswrapper[4832]: I0131 04:55:18.542694 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:55:19 crc kubenswrapper[4832]: I0131 04:55:19.270261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf" event={"ID":"a73e7d2a-36f0-49e9-82ab-11ede6b1761b","Type":"ContainerStarted","Data":"8a41da2835ad7dbcc5a72d88f0ae33902df676d6dff6bba5d6f91e0497762cea"}
Jan 31 04:55:21 crc kubenswrapper[4832]: I0131 04:55:21.287260 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf" event={"ID":"a73e7d2a-36f0-49e9-82ab-11ede6b1761b","Type":"ContainerStarted","Data":"1c2ea7a063812b70334aa0624239b4acee3f4e7620adbeb3fedc11342b1d8e34"}
Jan 31 04:55:21 crc kubenswrapper[4832]: I0131 04:55:21.325425 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wgqxf" podStartSLOduration=2.096355042 podStartE2EDuration="4.325383337s" podCreationTimestamp="2026-01-31 04:55:17 +0000 UTC" firstStartedPulling="2026-01-31 04:55:18.502909152 +0000 UTC m=+727.451730837" lastFinishedPulling="2026-01-31 04:55:20.731937447 +0000 UTC m=+729.680759132" observedRunningTime="2026-01-31 04:55:21.314752176 +0000 UTC m=+730.263573861" watchObservedRunningTime="2026-01-31 04:55:21.325383337 +0000 UTC m=+730.274205062"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.637051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m2d42"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.640536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.644885 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bzgm2"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.653835 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.655099 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.658017 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.691205 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m2d42"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.697303 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-z9vf4"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.698266 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.719181 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l49fz\" (UniqueName: \"kubernetes.io/projected/f14f6771-126c-41a5-9810-7e4ed01aae96-kube-api-access-l49fz\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-nmstate-lock\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkwk\" (UniqueName: \"kubernetes.io/projected/668e7e0f-218c-48f9-a40b-13f83d5bf7b9-kube-api-access-szkwk\") pod \"nmstate-metrics-54757c584b-m2d42\" (UID: \"668e7e0f-218c-48f9-a40b-13f83d5bf7b9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgqk\" (UniqueName: \"kubernetes.io/projected/c78413bf-08b7-4f63-b849-6206713fe6af-kube-api-access-lcgqk\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-ovs-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.739532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-dbus-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.823067 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.823993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.826937 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.827345 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.827520 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vpj6b"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.832420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"]
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgqk\" (UniqueName: \"kubernetes.io/projected/c78413bf-08b7-4f63-b849-6206713fe6af-kube-api-access-lcgqk\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-ovs-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-dbus-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l49fz\" (UniqueName: \"kubernetes.io/projected/f14f6771-126c-41a5-9810-7e4ed01aae96-kube-api-access-l49fz\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-nmstate-lock\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.841807 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkwk\" (UniqueName: \"kubernetes.io/projected/668e7e0f-218c-48f9-a40b-13f83d5bf7b9-kube-api-access-szkwk\") pod \"nmstate-metrics-54757c584b-m2d42\" (UID: \"668e7e0f-218c-48f9-a40b-13f83d5bf7b9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.842065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-ovs-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.842171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-nmstate-lock\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.842325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f14f6771-126c-41a5-9810-7e4ed01aae96-dbus-socket\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: E0131 04:55:26.842393 4832 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 31 04:55:26 crc kubenswrapper[4832]: E0131 04:55:26.842436 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair podName:c78413bf-08b7-4f63-b849-6206713fe6af nodeName:}" failed. No retries permitted until 2026-01-31 04:55:27.342420132 +0000 UTC m=+736.291241817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-nwkdn" (UID: "c78413bf-08b7-4f63-b849-6206713fe6af") : secret "openshift-nmstate-webhook" not found
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.863544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l49fz\" (UniqueName: \"kubernetes.io/projected/f14f6771-126c-41a5-9810-7e4ed01aae96-kube-api-access-l49fz\") pod \"nmstate-handler-z9vf4\" (UID: \"f14f6771-126c-41a5-9810-7e4ed01aae96\") " pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.864497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgqk\" (UniqueName: \"kubernetes.io/projected/c78413bf-08b7-4f63-b849-6206713fe6af-kube-api-access-lcgqk\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.864762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkwk\" (UniqueName: \"kubernetes.io/projected/668e7e0f-218c-48f9-a40b-13f83d5bf7b9-kube-api-access-szkwk\") pod \"nmstate-metrics-54757c584b-m2d42\" (UID: \"668e7e0f-218c-48f9-a40b-13f83d5bf7b9\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.943876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6abddfff-a35d-4b7a-aeba-354c6b045b6f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.943949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6abddfff-a35d-4b7a-aeba-354c6b045b6f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.944001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dl28\" (UniqueName: \"kubernetes.io/projected/6abddfff-a35d-4b7a-aeba-354c6b045b6f-kube-api-access-2dl28\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:26 crc kubenswrapper[4832]: I0131 04:55:26.962356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.014203 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.020635 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9dbdcf48c-5sc7n"]
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.021364 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048219 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-console-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-oauth-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-oauth-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-service-ca\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6abddfff-a35d-4b7a-aeba-354c6b045b6f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6abddfff-a35d-4b7a-aeba-354c6b045b6f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048804 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wvg\" (UniqueName: \"kubernetes.io/projected/f8dcd15c-0c21-4481-a85f-31266c6df367-kube-api-access-r8wvg\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048860 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-trusted-ca-bundle\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.048914 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dl28\" (UniqueName: \"kubernetes.io/projected/6abddfff-a35d-4b7a-aeba-354c6b045b6f-kube-api-access-2dl28\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.050843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6abddfff-a35d-4b7a-aeba-354c6b045b6f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.065890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6abddfff-a35d-4b7a-aeba-354c6b045b6f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.079488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dl28\" (UniqueName: \"kubernetes.io/projected/6abddfff-a35d-4b7a-aeba-354c6b045b6f-kube-api-access-2dl28\") pod \"nmstate-console-plugin-7754f76f8b-nl2wt\" (UID: \"6abddfff-a35d-4b7a-aeba-354c6b045b6f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.103208 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9dbdcf48c-5sc7n"]
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wvg\" (UniqueName: \"kubernetes.io/projected/f8dcd15c-0c21-4481-a85f-31266c6df367-kube-api-access-r8wvg\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-trusted-ca-bundle\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-console-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-oauth-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-oauth-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.150331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-service-ca\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.152084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-trusted-ca-bundle\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.152742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-oauth-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.153400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-service-ca\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.153792 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f8dcd15c-0c21-4481-a85f-31266c6df367-console-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.156873 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.159262 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-serving-cert\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.160542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f8dcd15c-0c21-4481-a85f-31266c6df367-console-oauth-config\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.171474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wvg\" (UniqueName: \"kubernetes.io/projected/f8dcd15c-0c21-4481-a85f-31266c6df367-kube-api-access-r8wvg\") pod \"console-9dbdcf48c-5sc7n\" (UID: \"f8dcd15c-0c21-4481-a85f-31266c6df367\") " pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.260953 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-m2d42"]
Jan 31 04:55:27 crc kubenswrapper[4832]: W0131 04:55:27.269046 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668e7e0f_218c_48f9_a40b_13f83d5bf7b9.slice/crio-661c71d4657df89e668f248ca1ec44f366e8131ae92e8f96d0694dd76fb36efe WatchSource:0}: Error finding container 661c71d4657df89e668f248ca1ec44f366e8131ae92e8f96d0694dd76fb36efe: Status 404 returned error can't find the container with id 661c71d4657df89e668f248ca1ec44f366e8131ae92e8f96d0694dd76fb36efe
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.335576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z9vf4" event={"ID":"f14f6771-126c-41a5-9810-7e4ed01aae96","Type":"ContainerStarted","Data":"a92b8841c17d81207157d8141ab8a74ea314ab55a16f38ef273ea86500743eb1"}
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.337696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42" event={"ID":"668e7e0f-218c-48f9-a40b-13f83d5bf7b9","Type":"ContainerStarted","Data":"661c71d4657df89e668f248ca1ec44f366e8131ae92e8f96d0694dd76fb36efe"}
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.358177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.363929 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c78413bf-08b7-4f63-b849-6206713fe6af-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nwkdn\" (UID: \"c78413bf-08b7-4f63-b849-6206713fe6af\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.396392 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.402751 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt"]
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.582636 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.621876 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9dbdcf48c-5sc7n"]
Jan 31 04:55:27 crc kubenswrapper[4832]: W0131 04:55:27.626318 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8dcd15c_0c21_4481_a85f_31266c6df367.slice/crio-93ddc7b2b001fd1146bc3215e3c22fffc82cd3246e0edfab5391e5ee5258e973 WatchSource:0}: Error finding container 93ddc7b2b001fd1146bc3215e3c22fffc82cd3246e0edfab5391e5ee5258e973: Status 404 returned error can't find the container with id 93ddc7b2b001fd1146bc3215e3c22fffc82cd3246e0edfab5391e5ee5258e973
Jan 31 04:55:27 crc kubenswrapper[4832]: I0131 04:55:27.765738 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"]
Jan 31 04:55:28 crc kubenswrapper[4832]: I0131 04:55:28.352267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9dbdcf48c-5sc7n" event={"ID":"f8dcd15c-0c21-4481-a85f-31266c6df367","Type":"ContainerStarted","Data":"a25981a2a4a6391c676f5976ad1bf9934b58abb85ecb9d5a74e40718485a8955"}
Jan 31 04:55:28 crc kubenswrapper[4832]: I0131 04:55:28.352339 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9dbdcf48c-5sc7n" event={"ID":"f8dcd15c-0c21-4481-a85f-31266c6df367","Type":"ContainerStarted","Data":"93ddc7b2b001fd1146bc3215e3c22fffc82cd3246e0edfab5391e5ee5258e973"}
Jan 31 04:55:28 crc kubenswrapper[4832]: I0131 04:55:28.354696 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt" event={"ID":"6abddfff-a35d-4b7a-aeba-354c6b045b6f","Type":"ContainerStarted","Data":"f53fe5e1f1b09a3159b0843c16c79f298f6272a8413c3a54a2438fa8947c2363"}
Jan 31 04:55:28 crc kubenswrapper[4832]: I0131 04:55:28.356215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn" event={"ID":"c78413bf-08b7-4f63-b849-6206713fe6af","Type":"ContainerStarted","Data":"f898d1c0a8cdc1534ff0e448d536e8b3d52fbb44f31ce3d5d08b31877b27ed69"}
Jan 31 04:55:28 crc kubenswrapper[4832]: I0131 04:55:28.389258 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9dbdcf48c-5sc7n" podStartSLOduration=1.3892240390000001 podStartE2EDuration="1.389224039s" podCreationTimestamp="2026-01-31 04:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:55:28.384467271 +0000 UTC m=+737.333288996" watchObservedRunningTime="2026-01-31 04:55:28.389224039 +0000 UTC m=+737.338045774"
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.390920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn" event={"ID":"c78413bf-08b7-4f63-b849-6206713fe6af","Type":"ContainerStarted","Data":"616b2d6893a976ddf1ca0a88fdf95850ef43b9f84accdba9352b77b9094d5daa"}
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.392061 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.393987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42" event={"ID":"668e7e0f-218c-48f9-a40b-13f83d5bf7b9","Type":"ContainerStarted","Data":"192bd324d52c39287eed8042cf3734951c31b4d336bd94d5fc1efcc89caf9e45"}
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.398589 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt" event={"ID":"6abddfff-a35d-4b7a-aeba-354c6b045b6f","Type":"ContainerStarted","Data":"49a47cd4ef8e6960d0977183b9ae5a6525fea5cea87d8d3a8d097af04c217067"}
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.402394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-z9vf4" event={"ID":"f14f6771-126c-41a5-9810-7e4ed01aae96","Type":"ContainerStarted","Data":"923266e57cdcb6fef3df3fbb540aa33a04d9109e2785bfeb300a27ca7e25fb34"}
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.402669 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.427080 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn" podStartSLOduration=2.719584308 podStartE2EDuration="5.427045801s" podCreationTimestamp="2026-01-31 04:55:26 +0000 UTC" firstStartedPulling="2026-01-31 04:55:27.774216799 +0000 UTC m=+736.723038484" lastFinishedPulling="2026-01-31 04:55:30.481678252 +0000 UTC m=+739.430499977" observedRunningTime="2026-01-31 04:55:31.422783329 +0000 UTC m=+740.371605104" watchObservedRunningTime="2026-01-31 04:55:31.427045801 +0000 UTC m=+740.375867526"
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.448425 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nl2wt" podStartSLOduration=2.397331021 podStartE2EDuration="5.448381444s" podCreationTimestamp="2026-01-31 04:55:26 +0000 UTC" firstStartedPulling="2026-01-31 04:55:27.419592026 +0000 UTC m=+736.368413711" lastFinishedPulling="2026-01-31 04:55:30.470642409 +0000 UTC m=+739.419464134" observedRunningTime="2026-01-31 04:55:31.44118591 +0000 UTC m=+740.390007635" watchObservedRunningTime="2026-01-31 04:55:31.448381444 +0000 UTC m=+740.397203169"
Jan 31 04:55:31 crc kubenswrapper[4832]: I0131 04:55:31.471181 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-z9vf4" podStartSLOduration=2.071040167 podStartE2EDuration="5.471159621s" podCreationTimestamp="2026-01-31 04:55:26 +0000 UTC" firstStartedPulling="2026-01-31 04:55:27.070520615 +0000 UTC m=+736.019342300" lastFinishedPulling="2026-01-31 04:55:30.470640039 +0000 UTC m=+739.419461754" observedRunningTime="2026-01-31 04:55:31.468339844 +0000 UTC m=+740.417161559" watchObservedRunningTime="2026-01-31 04:55:31.471159621 +0000 UTC m=+740.419981306"
Jan 31 04:55:34 crc kubenswrapper[4832]: I0131 04:55:34.433945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42" event={"ID":"668e7e0f-218c-48f9-a40b-13f83d5bf7b9","Type":"ContainerStarted","Data":"b8cbac8d97b4cdcb44812239c9b00ed422b65c1277cd81cae6b58886b4beca09"}
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.045053 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-z9vf4"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.070376 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-m2d42" podStartSLOduration=4.1683073 podStartE2EDuration="11.070346558s" podCreationTimestamp="2026-01-31 04:55:26 +0000 UTC" firstStartedPulling="2026-01-31 04:55:27.271177977 +0000 UTC m=+736.219999662" lastFinishedPulling="2026-01-31 04:55:34.173217215 +0000 UTC m=+743.122038920" observedRunningTime="2026-01-31 04:55:35.480103362 +0000 UTC m=+744.428925057" watchObservedRunningTime="2026-01-31 04:55:37.070346558 +0000 UTC m=+746.019168273"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.396765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.396852 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.404381 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.465148 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9dbdcf48c-5sc7n"
Jan 31 04:55:37 crc kubenswrapper[4832]: I0131 04:55:37.535127 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"]
Jan 31 04:55:47 crc kubenswrapper[4832]: I0131 04:55:47.595858 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nwkdn"
Jan 31 04:55:48 crc kubenswrapper[4832]: I0131 04:55:48.539916 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:55:48 crc kubenswrapper[4832]: I0131 04:55:48.540037 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:55:48 crc kubenswrapper[4832]: I0131 04:55:48.540106 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458"
Jan 31 04:55:48 crc kubenswrapper[4832]: I0131 04:55:48.540793 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon"
containerStatusID={"Type":"cri-o","ID":"4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:55:48 crc kubenswrapper[4832]: I0131 04:55:48.540882 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b" gracePeriod=600 Jan 31 04:55:49 crc kubenswrapper[4832]: I0131 04:55:49.555708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b"} Jan 31 04:55:49 crc kubenswrapper[4832]: I0131 04:55:49.556499 4832 scope.go:117] "RemoveContainer" containerID="d54e6ecf500ec207e0d16ef3d194a42f93e403a3101f3a40a19752b5d12529a1" Jan 31 04:55:49 crc kubenswrapper[4832]: I0131 04:55:49.555655 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b" exitCode=0 Jan 31 04:55:49 crc kubenswrapper[4832]: I0131 04:55:49.556602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b"} Jan 31 04:55:59 crc kubenswrapper[4832]: I0131 04:55:59.766907 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:56:02 crc 
kubenswrapper[4832]: I0131 04:56:02.595685 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sjkqt" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" containerID="cri-o://a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f" gracePeriod=15 Jan 31 04:56:02 crc kubenswrapper[4832]: I0131 04:56:02.959687 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sjkqt_70780eee-9367-4fce-923e-fc7b8ec0e88a/console/0.log" Jan 31 04:56:02 crc kubenswrapper[4832]: I0131 04:56:02.960080 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147434 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nx8\" (UniqueName: \"kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147664 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.147737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert\") pod \"70780eee-9367-4fce-923e-fc7b8ec0e88a\" (UID: \"70780eee-9367-4fce-923e-fc7b8ec0e88a\") " Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.148404 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.148435 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.148463 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config" (OuterVolumeSpecName: "console-config") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.148444 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca" (OuterVolumeSpecName: "service-ca") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.156416 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.156819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8" (OuterVolumeSpecName: "kube-api-access-v4nx8") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "kube-api-access-v4nx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.156855 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "70780eee-9367-4fce-923e-fc7b8ec0e88a" (UID: "70780eee-9367-4fce-923e-fc7b8ec0e88a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.249272 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.249738 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.249909 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.250062 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.250178 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70780eee-9367-4fce-923e-fc7b8ec0e88a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.250289 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4nx8\" (UniqueName: \"kubernetes.io/projected/70780eee-9367-4fce-923e-fc7b8ec0e88a-kube-api-access-v4nx8\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.250401 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70780eee-9367-4fce-923e-fc7b8ec0e88a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.364894 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk"] Jan 31 04:56:03 crc kubenswrapper[4832]: E0131 04:56:03.365525 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.365803 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.366164 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerName="console" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.367721 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.372713 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.372978 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk"] Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.452896 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.453181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.453204 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpc6m\" (UniqueName: \"kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: 
I0131 04:56:03.554279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.554685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpc6m\" (UniqueName: \"kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.554922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.555083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.555355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.573811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpc6m\" (UniqueName: \"kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.669387 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sjkqt_70780eee-9367-4fce-923e-fc7b8ec0e88a/console/0.log" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.670753 4832 generic.go:334] "Generic (PLEG): container finished" podID="70780eee-9367-4fce-923e-fc7b8ec0e88a" containerID="a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f" exitCode=2 Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.670844 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sjkqt" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.670834 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sjkqt" event={"ID":"70780eee-9367-4fce-923e-fc7b8ec0e88a","Type":"ContainerDied","Data":"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f"} Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.671253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sjkqt" event={"ID":"70780eee-9367-4fce-923e-fc7b8ec0e88a","Type":"ContainerDied","Data":"55c158667b42fb711dc200e8325a26d8635b76a5ac8667b76ba6e3fc335a3b9f"} Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.671319 4832 scope.go:117] "RemoveContainer" containerID="a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.690003 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.698636 4832 scope.go:117] "RemoveContainer" containerID="a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f" Jan 31 04:56:03 crc kubenswrapper[4832]: E0131 04:56:03.699206 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f\": container with ID starting with a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f not found: ID does not exist" containerID="a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.699243 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f"} err="failed 
to get container status \"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f\": rpc error: code = NotFound desc = could not find container \"a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f\": container with ID starting with a7f59be86ce32067a7da0d304c692b88a5d6f5d31d2f18ac5dd9c6a9d7e4f52f not found: ID does not exist" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.709217 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"] Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.713828 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sjkqt"] Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.871412 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70780eee-9367-4fce-923e-fc7b8ec0e88a" path="/var/lib/kubelet/pods/70780eee-9367-4fce-923e-fc7b8ec0e88a/volumes" Jan 31 04:56:03 crc kubenswrapper[4832]: I0131 04:56:03.965968 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk"] Jan 31 04:56:03 crc kubenswrapper[4832]: W0131 04:56:03.976401 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod265071a1_233b_4945_b4d0_e6f20a6b4ab2.slice/crio-43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8 WatchSource:0}: Error finding container 43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8: Status 404 returned error can't find the container with id 43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8 Jan 31 04:56:04 crc kubenswrapper[4832]: I0131 04:56:04.680164 4832 generic.go:334] "Generic (PLEG): container finished" podID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerID="09ecf6f57dacf0fd7167fd426a969f1019faa1faf45ade2dff5d711f184e9b98" exitCode=0 Jan 31 04:56:04 crc kubenswrapper[4832]: I0131 
04:56:04.680252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" event={"ID":"265071a1-233b-4945-b4d0-e6f20a6b4ab2","Type":"ContainerDied","Data":"09ecf6f57dacf0fd7167fd426a969f1019faa1faf45ade2dff5d711f184e9b98"} Jan 31 04:56:04 crc kubenswrapper[4832]: I0131 04:56:04.680621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" event={"ID":"265071a1-233b-4945-b4d0-e6f20a6b4ab2","Type":"ContainerStarted","Data":"43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8"} Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.701538 4832 generic.go:334] "Generic (PLEG): container finished" podID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerID="1e96cd9fe349fc5628908677b03c848effc83f85c38fea0e9edae2eb42cfe30e" exitCode=0 Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.701727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" event={"ID":"265071a1-233b-4945-b4d0-e6f20a6b4ab2","Type":"ContainerDied","Data":"1e96cd9fe349fc5628908677b03c848effc83f85c38fea0e9edae2eb42cfe30e"} Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.728346 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.731219 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.737975 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.899507 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.899571 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6vh\" (UniqueName: \"kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:06 crc kubenswrapper[4832]: I0131 04:56:06.899591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.001188 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.001255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mv6vh\" (UniqueName: \"kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.001280 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.001684 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.001902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.022959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6vh\" (UniqueName: \"kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh\") pod \"redhat-operators-hgmrs\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.127676 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.340798 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.711656 4832 generic.go:334] "Generic (PLEG): container finished" podID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerID="3d979255128623ad1249cc6e1da44f8cfe561af08926c20e03d09fe02d4e0722" exitCode=0 Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.711777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" event={"ID":"265071a1-233b-4945-b4d0-e6f20a6b4ab2","Type":"ContainerDied","Data":"3d979255128623ad1249cc6e1da44f8cfe561af08926c20e03d09fe02d4e0722"} Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.714058 4832 generic.go:334] "Generic (PLEG): container finished" podID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerID="9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b" exitCode=0 Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.714125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerDied","Data":"9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b"} Jan 31 04:56:07 crc kubenswrapper[4832]: I0131 04:56:07.714170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerStarted","Data":"961857156f92f18f0b9d3f530e7d3651aa36901f9270c9b73b1f19ffa72818a4"} Jan 31 04:56:08 crc kubenswrapper[4832]: I0131 04:56:08.726224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" 
event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerStarted","Data":"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075"} Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.031488 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.128846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle\") pod \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.128992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpc6m\" (UniqueName: \"kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m\") pod \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.129143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util\") pod \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\" (UID: \"265071a1-233b-4945-b4d0-e6f20a6b4ab2\") " Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.130038 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle" (OuterVolumeSpecName: "bundle") pod "265071a1-233b-4945-b4d0-e6f20a6b4ab2" (UID: "265071a1-233b-4945-b4d0-e6f20a6b4ab2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.135489 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m" (OuterVolumeSpecName: "kube-api-access-hpc6m") pod "265071a1-233b-4945-b4d0-e6f20a6b4ab2" (UID: "265071a1-233b-4945-b4d0-e6f20a6b4ab2"). InnerVolumeSpecName "kube-api-access-hpc6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.144098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util" (OuterVolumeSpecName: "util") pod "265071a1-233b-4945-b4d0-e6f20a6b4ab2" (UID: "265071a1-233b-4945-b4d0-e6f20a6b4ab2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.231474 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.231513 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/265071a1-233b-4945-b4d0-e6f20a6b4ab2-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.231523 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpc6m\" (UniqueName: \"kubernetes.io/projected/265071a1-233b-4945-b4d0-e6f20a6b4ab2-kube-api-access-hpc6m\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.737395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" 
event={"ID":"265071a1-233b-4945-b4d0-e6f20a6b4ab2","Type":"ContainerDied","Data":"43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8"} Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.737456 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a9174c9edcee4da4bd2d285f1858cd83d3eac6560f8f9ceecf867e658193b8" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.737486 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk" Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.739218 4832 generic.go:334] "Generic (PLEG): container finished" podID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerID="0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075" exitCode=0 Jan 31 04:56:09 crc kubenswrapper[4832]: I0131 04:56:09.739275 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerDied","Data":"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075"} Jan 31 04:56:10 crc kubenswrapper[4832]: I0131 04:56:10.749351 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerStarted","Data":"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f"} Jan 31 04:56:10 crc kubenswrapper[4832]: I0131 04:56:10.778377 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hgmrs" podStartSLOduration=2.333823769 podStartE2EDuration="4.778358056s" podCreationTimestamp="2026-01-31 04:56:06 +0000 UTC" firstStartedPulling="2026-01-31 04:56:07.715670562 +0000 UTC m=+776.664492247" lastFinishedPulling="2026-01-31 04:56:10.160204839 +0000 UTC m=+779.109026534" observedRunningTime="2026-01-31 
04:56:10.775301361 +0000 UTC m=+779.724123046" watchObservedRunningTime="2026-01-31 04:56:10.778358056 +0000 UTC m=+779.727179741" Jan 31 04:56:17 crc kubenswrapper[4832]: I0131 04:56:17.128455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:17 crc kubenswrapper[4832]: I0131 04:56:17.129040 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:17 crc kubenswrapper[4832]: I0131 04:56:17.183434 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:17 crc kubenswrapper[4832]: I0131 04:56:17.838014 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.077763 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7"] Jan 31 04:56:18 crc kubenswrapper[4832]: E0131 04:56:18.078282 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="extract" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.078294 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="extract" Jan 31 04:56:18 crc kubenswrapper[4832]: E0131 04:56:18.078315 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="pull" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.078322 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="pull" Jan 31 04:56:18 crc kubenswrapper[4832]: E0131 04:56:18.078334 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" 
containerName="util" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.078339 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="util" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.078432 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="265071a1-233b-4945-b4d0-e6f20a6b4ab2" containerName="extract" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.078844 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.082963 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.084170 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-f5rtf" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.084247 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.084325 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.085665 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.103098 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7"] Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.255965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5rt\" (UniqueName: 
\"kubernetes.io/projected/873c6fd7-9f23-4376-96f6-3e8a19b56593-kube-api-access-6c5rt\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.256124 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-webhook-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.256209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-apiservice-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.357663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5rt\" (UniqueName: \"kubernetes.io/projected/873c6fd7-9f23-4376-96f6-3e8a19b56593-kube-api-access-6c5rt\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.357730 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-webhook-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: 
\"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.357762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-apiservice-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.365137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-webhook-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.382391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/873c6fd7-9f23-4376-96f6-3e8a19b56593-apiservice-cert\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.382467 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5rt\" (UniqueName: \"kubernetes.io/projected/873c6fd7-9f23-4376-96f6-3e8a19b56593-kube-api-access-6c5rt\") pod \"metallb-operator-controller-manager-8567bf5564-5cjq7\" (UID: \"873c6fd7-9f23-4376-96f6-3e8a19b56593\") " pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.394423 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.434516 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2"] Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.435375 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.440720 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.440999 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.441171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-76qq4" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.454282 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2"] Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.577143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8zb\" (UniqueName: \"kubernetes.io/projected/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-kube-api-access-rj8zb\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.577749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-webhook-cert\") pod 
\"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.579237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.680103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-webhook-cert\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.680161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.680191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8zb\" (UniqueName: \"kubernetes.io/projected/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-kube-api-access-rj8zb\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.688439 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-apiservice-cert\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.689493 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-webhook-cert\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.692450 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7"] Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.705343 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8zb\" (UniqueName: \"kubernetes.io/projected/01a0d1ce-012e-4200-ac92-995c0f1a2d1c-kube-api-access-rj8zb\") pod \"metallb-operator-webhook-server-69fbdc97fc-wxfb2\" (UID: \"01a0d1ce-012e-4200-ac92-995c0f1a2d1c\") " pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.784514 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:18 crc kubenswrapper[4832]: I0131 04:56:18.799424 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" event={"ID":"873c6fd7-9f23-4376-96f6-3e8a19b56593","Type":"ContainerStarted","Data":"bd1332f157136f004ec49a230dd09304a1cb177413f975faf6b864471ba15208"} Jan 31 04:56:19 crc kubenswrapper[4832]: I0131 04:56:19.059983 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2"] Jan 31 04:56:19 crc kubenswrapper[4832]: W0131 04:56:19.073609 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a0d1ce_012e_4200_ac92_995c0f1a2d1c.slice/crio-56d0e54e8b01faf6dc0802b235f81a4e1f21ae55e71ccb6b2e9647fb06f90ac3 WatchSource:0}: Error finding container 56d0e54e8b01faf6dc0802b235f81a4e1f21ae55e71ccb6b2e9647fb06f90ac3: Status 404 returned error can't find the container with id 56d0e54e8b01faf6dc0802b235f81a4e1f21ae55e71ccb6b2e9647fb06f90ac3 Jan 31 04:56:19 crc kubenswrapper[4832]: I0131 04:56:19.699736 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:19 crc kubenswrapper[4832]: I0131 04:56:19.809346 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hgmrs" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="registry-server" containerID="cri-o://9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f" gracePeriod=2 Jan 31 04:56:19 crc kubenswrapper[4832]: I0131 04:56:19.810965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" 
event={"ID":"01a0d1ce-012e-4200-ac92-995c0f1a2d1c","Type":"ContainerStarted","Data":"56d0e54e8b01faf6dc0802b235f81a4e1f21ae55e71ccb6b2e9647fb06f90ac3"} Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.211966 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.402575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities\") pod \"f95fd2ef-40eb-4eac-9868-487b15a210fc\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.403258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content\") pod \"f95fd2ef-40eb-4eac-9868-487b15a210fc\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.403379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6vh\" (UniqueName: \"kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh\") pod \"f95fd2ef-40eb-4eac-9868-487b15a210fc\" (UID: \"f95fd2ef-40eb-4eac-9868-487b15a210fc\") " Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.405765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities" (OuterVolumeSpecName: "utilities") pod "f95fd2ef-40eb-4eac-9868-487b15a210fc" (UID: "f95fd2ef-40eb-4eac-9868-487b15a210fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.412793 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh" (OuterVolumeSpecName: "kube-api-access-mv6vh") pod "f95fd2ef-40eb-4eac-9868-487b15a210fc" (UID: "f95fd2ef-40eb-4eac-9868-487b15a210fc"). InnerVolumeSpecName "kube-api-access-mv6vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.504745 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.504780 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6vh\" (UniqueName: \"kubernetes.io/projected/f95fd2ef-40eb-4eac-9868-487b15a210fc-kube-api-access-mv6vh\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.537872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f95fd2ef-40eb-4eac-9868-487b15a210fc" (UID: "f95fd2ef-40eb-4eac-9868-487b15a210fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.605867 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f95fd2ef-40eb-4eac-9868-487b15a210fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.834065 4832 generic.go:334] "Generic (PLEG): container finished" podID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerID="9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f" exitCode=0 Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.834148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerDied","Data":"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f"} Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.834198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgmrs" event={"ID":"f95fd2ef-40eb-4eac-9868-487b15a210fc","Type":"ContainerDied","Data":"961857156f92f18f0b9d3f530e7d3651aa36901f9270c9b73b1f19ffa72818a4"} Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.834219 4832 scope.go:117] "RemoveContainer" containerID="9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.834355 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgmrs" Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.871144 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:20 crc kubenswrapper[4832]: I0131 04:56:20.874869 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hgmrs"] Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.544240 4832 scope.go:117] "RemoveContainer" containerID="0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.603380 4832 scope.go:117] "RemoveContainer" containerID="9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.627975 4832 scope.go:117] "RemoveContainer" containerID="9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f" Jan 31 04:56:21 crc kubenswrapper[4832]: E0131 04:56:21.628803 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f\": container with ID starting with 9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f not found: ID does not exist" containerID="9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.628837 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f"} err="failed to get container status \"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f\": rpc error: code = NotFound desc = could not find container \"9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f\": container with ID starting with 9118a39ca04b351a8ef299d4139a383692d83ba3becae60477f9bf1ebdb91b1f not found: ID does 
not exist" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.628861 4832 scope.go:117] "RemoveContainer" containerID="0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075" Jan 31 04:56:21 crc kubenswrapper[4832]: E0131 04:56:21.629310 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075\": container with ID starting with 0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075 not found: ID does not exist" containerID="0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.629371 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075"} err="failed to get container status \"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075\": rpc error: code = NotFound desc = could not find container \"0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075\": container with ID starting with 0710c376028eebcae8f4b76f8d89bb5ac49efb7fe0245c1bd48965bedcec4075 not found: ID does not exist" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.629413 4832 scope.go:117] "RemoveContainer" containerID="9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b" Jan 31 04:56:21 crc kubenswrapper[4832]: E0131 04:56:21.629712 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b\": container with ID starting with 9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b not found: ID does not exist" containerID="9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.629740 4832 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b"} err="failed to get container status \"9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b\": rpc error: code = NotFound desc = could not find container \"9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b\": container with ID starting with 9c243f6a0b09cc90de1bf5c2df440093f5a81fcd132eb3314ecc83795f4eab4b not found: ID does not exist" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.857591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" event={"ID":"873c6fd7-9f23-4376-96f6-3e8a19b56593","Type":"ContainerStarted","Data":"dbc0e33ea1a7a05962cd59cf5c7c378586c8de93312d0794d8ba421ff23a9e43"} Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.857858 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.871776 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" path="/var/lib/kubelet/pods/f95fd2ef-40eb-4eac-9868-487b15a210fc/volumes" Jan 31 04:56:21 crc kubenswrapper[4832]: I0131 04:56:21.885474 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" podStartSLOduration=0.98336516 podStartE2EDuration="3.885449136s" podCreationTimestamp="2026-01-31 04:56:18 +0000 UTC" firstStartedPulling="2026-01-31 04:56:18.704824379 +0000 UTC m=+787.653646064" lastFinishedPulling="2026-01-31 04:56:21.606908355 +0000 UTC m=+790.555730040" observedRunningTime="2026-01-31 04:56:21.881392219 +0000 UTC m=+790.830213904" watchObservedRunningTime="2026-01-31 04:56:21.885449136 +0000 UTC m=+790.834270821" Jan 31 04:56:23 crc kubenswrapper[4832]: I0131 
04:56:23.874774 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" event={"ID":"01a0d1ce-012e-4200-ac92-995c0f1a2d1c","Type":"ContainerStarted","Data":"bac9e312839f8a65618fa25683a977f00b5a47c721cf0fe9f04109090095aa99"} Jan 31 04:56:23 crc kubenswrapper[4832]: I0131 04:56:23.875882 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:23 crc kubenswrapper[4832]: I0131 04:56:23.908815 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" podStartSLOduration=1.4823393550000001 podStartE2EDuration="5.908793732s" podCreationTimestamp="2026-01-31 04:56:18 +0000 UTC" firstStartedPulling="2026-01-31 04:56:19.076591094 +0000 UTC m=+788.025412779" lastFinishedPulling="2026-01-31 04:56:23.503045471 +0000 UTC m=+792.451867156" observedRunningTime="2026-01-31 04:56:23.903475517 +0000 UTC m=+792.852297222" watchObservedRunningTime="2026-01-31 04:56:23.908793732 +0000 UTC m=+792.857615427" Jan 31 04:56:38 crc kubenswrapper[4832]: I0131 04:56:38.794915 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69fbdc97fc-wxfb2" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.974809 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:39 crc kubenswrapper[4832]: E0131 04:56:39.975330 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="extract-content" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.975342 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="extract-content" Jan 31 04:56:39 crc kubenswrapper[4832]: E0131 04:56:39.975356 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="registry-server" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.975364 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="registry-server" Jan 31 04:56:39 crc kubenswrapper[4832]: E0131 04:56:39.975377 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="extract-utilities" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.975385 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="extract-utilities" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.975485 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95fd2ef-40eb-4eac-9868-487b15a210fc" containerName="registry-server" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.978061 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:39 crc kubenswrapper[4832]: I0131 04:56:39.987785 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.093695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.094034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs96w\" (UniqueName: \"kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.094160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.196084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs96w\" (UniqueName: \"kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.196160 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.196220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.196812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.196852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.228587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs96w\" (UniqueName: \"kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w\") pod \"certified-operators-6rl7r\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.297606 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.837710 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:40 crc kubenswrapper[4832]: W0131 04:56:40.845741 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod367966b3_161b_4abd_badc_c1fddc3637c9.slice/crio-52b90cea33528b79ef2bf10c7b268d0eb6407fa72a96fdc30b2c564db6dcdb44 WatchSource:0}: Error finding container 52b90cea33528b79ef2bf10c7b268d0eb6407fa72a96fdc30b2c564db6dcdb44: Status 404 returned error can't find the container with id 52b90cea33528b79ef2bf10c7b268d0eb6407fa72a96fdc30b2c564db6dcdb44 Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.991764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerStarted","Data":"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418"} Jan 31 04:56:40 crc kubenswrapper[4832]: I0131 04:56:40.992087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerStarted","Data":"52b90cea33528b79ef2bf10c7b268d0eb6407fa72a96fdc30b2c564db6dcdb44"} Jan 31 04:56:41 crc kubenswrapper[4832]: I0131 04:56:41.999526 4832 generic.go:334] "Generic (PLEG): container finished" podID="367966b3-161b-4abd-badc-c1fddc3637c9" containerID="d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418" exitCode=0 Jan 31 04:56:42 crc kubenswrapper[4832]: I0131 04:56:41.999630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" 
event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerDied","Data":"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418"} Jan 31 04:56:43 crc kubenswrapper[4832]: I0131 04:56:43.007586 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerStarted","Data":"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd"} Jan 31 04:56:44 crc kubenswrapper[4832]: I0131 04:56:44.019061 4832 generic.go:334] "Generic (PLEG): container finished" podID="367966b3-161b-4abd-badc-c1fddc3637c9" containerID="c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd" exitCode=0 Jan 31 04:56:44 crc kubenswrapper[4832]: I0131 04:56:44.019132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerDied","Data":"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd"} Jan 31 04:56:45 crc kubenswrapper[4832]: I0131 04:56:45.027361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerStarted","Data":"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77"} Jan 31 04:56:45 crc kubenswrapper[4832]: I0131 04:56:45.045493 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rl7r" podStartSLOduration=3.629803727 podStartE2EDuration="6.045473928s" podCreationTimestamp="2026-01-31 04:56:39 +0000 UTC" firstStartedPulling="2026-01-31 04:56:42.00166772 +0000 UTC m=+810.950489405" lastFinishedPulling="2026-01-31 04:56:44.417337911 +0000 UTC m=+813.366159606" observedRunningTime="2026-01-31 04:56:45.04294074 +0000 UTC m=+813.991762435" watchObservedRunningTime="2026-01-31 04:56:45.045473928 +0000 UTC 
m=+813.994295613" Jan 31 04:56:48 crc kubenswrapper[4832]: I0131 04:56:48.088934 4832 patch_prober.go:28] interesting pod/dns-default-zz6xx container/dns namespace/openshift-dns: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=kubernetes Jan 31 04:56:48 crc kubenswrapper[4832]: I0131 04:56:48.089692 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-zz6xx" podUID="db0f0a11-f2c4-4358-8a5a-f6f992f0efc7" containerName="dns" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 04:56:50 crc kubenswrapper[4832]: I0131 04:56:50.298530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:50 crc kubenswrapper[4832]: I0131 04:56:50.300832 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:50 crc kubenswrapper[4832]: I0131 04:56:50.371875 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:51 crc kubenswrapper[4832]: I0131 04:56:51.142195 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:51 crc kubenswrapper[4832]: I0131 04:56:51.200390 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.087472 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rl7r" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="registry-server" containerID="cri-o://5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77" gracePeriod=2 Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.523161 4832 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.600069 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content\") pod \"367966b3-161b-4abd-badc-c1fddc3637c9\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.600162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs96w\" (UniqueName: \"kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w\") pod \"367966b3-161b-4abd-badc-c1fddc3637c9\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.600262 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities\") pod \"367966b3-161b-4abd-badc-c1fddc3637c9\" (UID: \"367966b3-161b-4abd-badc-c1fddc3637c9\") " Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.601351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities" (OuterVolumeSpecName: "utilities") pod "367966b3-161b-4abd-badc-c1fddc3637c9" (UID: "367966b3-161b-4abd-badc-c1fddc3637c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.609037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w" (OuterVolumeSpecName: "kube-api-access-vs96w") pod "367966b3-161b-4abd-badc-c1fddc3637c9" (UID: "367966b3-161b-4abd-badc-c1fddc3637c9"). InnerVolumeSpecName "kube-api-access-vs96w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.702605 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs96w\" (UniqueName: \"kubernetes.io/projected/367966b3-161b-4abd-badc-c1fddc3637c9-kube-api-access-vs96w\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.703007 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.847984 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "367966b3-161b-4abd-badc-c1fddc3637c9" (UID: "367966b3-161b-4abd-badc-c1fddc3637c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:56:53 crc kubenswrapper[4832]: I0131 04:56:53.906094 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/367966b3-161b-4abd-badc-c1fddc3637c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.104898 4832 generic.go:334] "Generic (PLEG): container finished" podID="367966b3-161b-4abd-badc-c1fddc3637c9" containerID="5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77" exitCode=0 Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.104976 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerDied","Data":"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77"} Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.105026 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-6rl7r" event={"ID":"367966b3-161b-4abd-badc-c1fddc3637c9","Type":"ContainerDied","Data":"52b90cea33528b79ef2bf10c7b268d0eb6407fa72a96fdc30b2c564db6dcdb44"} Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.105061 4832 scope.go:117] "RemoveContainer" containerID="5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.105732 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rl7r" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.137227 4832 scope.go:117] "RemoveContainer" containerID="c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.139788 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.145257 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rl7r"] Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.158439 4832 scope.go:117] "RemoveContainer" containerID="d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.178604 4832 scope.go:117] "RemoveContainer" containerID="5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77" Jan 31 04:56:54 crc kubenswrapper[4832]: E0131 04:56:54.179354 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77\": container with ID starting with 5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77 not found: ID does not exist" containerID="5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 
04:56:54.179414 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77"} err="failed to get container status \"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77\": rpc error: code = NotFound desc = could not find container \"5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77\": container with ID starting with 5b1aabbc5140270599ec41767661c39f4fc2d0074ba0bd4d105d8c843af52f77 not found: ID does not exist" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.179448 4832 scope.go:117] "RemoveContainer" containerID="c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd" Jan 31 04:56:54 crc kubenswrapper[4832]: E0131 04:56:54.180010 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd\": container with ID starting with c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd not found: ID does not exist" containerID="c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.180052 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd"} err="failed to get container status \"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd\": rpc error: code = NotFound desc = could not find container \"c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd\": container with ID starting with c5893b8a172191a695fa5b460f4377536b2929edec5b4c4744881778f32552fd not found: ID does not exist" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.180083 4832 scope.go:117] "RemoveContainer" containerID="d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418" Jan 31 04:56:54 crc 
kubenswrapper[4832]: E0131 04:56:54.180366 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418\": container with ID starting with d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418 not found: ID does not exist" containerID="d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418" Jan 31 04:56:54 crc kubenswrapper[4832]: I0131 04:56:54.180391 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418"} err="failed to get container status \"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418\": rpc error: code = NotFound desc = could not find container \"d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418\": container with ID starting with d7f77c330ef9301d92f7838d1de06a829db48a4e58eeeae5b92602db256b8418 not found: ID does not exist" Jan 31 04:56:55 crc kubenswrapper[4832]: I0131 04:56:55.873615 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" path="/var/lib/kubelet/pods/367966b3-161b-4abd-badc-c1fddc3637c9/volumes" Jan 31 04:56:58 crc kubenswrapper[4832]: I0131 04:56:58.490734 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8567bf5564-5cjq7" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.287212 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mkfvm"] Jan 31 04:56:59 crc kubenswrapper[4832]: E0131 04:56:59.287509 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="extract-content" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.287525 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="extract-content" Jan 31 04:56:59 crc kubenswrapper[4832]: E0131 04:56:59.287539 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="extract-utilities" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.287545 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="extract-utilities" Jan 31 04:56:59 crc kubenswrapper[4832]: E0131 04:56:59.287555 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="registry-server" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.287614 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="registry-server" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.287722 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="367966b3-161b-4abd-badc-c1fddc3637c9" containerName="registry-server" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.289944 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.295522 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.295654 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-l44wv" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.297234 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.308113 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.309086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.312894 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.329540 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.377327 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cbkcm"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.378225 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.381707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.381965 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.382078 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.382252 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-566bb" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.393313 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-d4lls"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69815ead-05b6-4300-b463-b8781a92335c-metrics-certs\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69815ead-05b6-4300-b463-b8781a92335c-frr-startup\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mjs\" (UniqueName: \"kubernetes.io/projected/c10722d0-a029-4829-87c5-3f4340ea19ff-kube-api-access-77mjs\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" 
(UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-sockets\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-conf\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396697 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-reloader\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c10722d0-a029-4829-87c5-3f4340ea19ff-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" (UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5znk\" (UniqueName: \"kubernetes.io/projected/69815ead-05b6-4300-b463-b8781a92335c-kube-api-access-r5znk\") pod \"frr-k8s-mkfvm\" (UID: 
\"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.396772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-metrics\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.401415 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.405398 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.415595 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-d4lls"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498567 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metrics-certs\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498741 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69815ead-05b6-4300-b463-b8781a92335c-frr-startup\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498838 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mjs\" (UniqueName: \"kubernetes.io/projected/c10722d0-a029-4829-87c5-3f4340ea19ff-kube-api-access-77mjs\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" (UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metallb-excludel2\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498908 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-sockets\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.498963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-conf\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499030 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-reloader\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc 
kubenswrapper[4832]: I0131 04:56:59.499066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4298\" (UniqueName: \"kubernetes.io/projected/b8624b4c-df9a-43b3-8f8c-99b9290a7956-kube-api-access-d4298\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499100 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c10722d0-a029-4829-87c5-3f4340ea19ff-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" (UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5znk\" (UniqueName: \"kubernetes.io/projected/69815ead-05b6-4300-b463-b8781a92335c-kube-api-access-r5znk\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vfk\" (UniqueName: \"kubernetes.io/projected/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-kube-api-access-r7vfk\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-metrics-certs\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc 
kubenswrapper[4832]: I0131 04:56:59.499236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-metrics\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-cert\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499372 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69815ead-05b6-4300-b463-b8781a92335c-metrics-certs\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-reloader\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.499922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-sockets\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.500189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-frr-conf\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.500264 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69815ead-05b6-4300-b463-b8781a92335c-metrics\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.500330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69815ead-05b6-4300-b463-b8781a92335c-frr-startup\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.517643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69815ead-05b6-4300-b463-b8781a92335c-metrics-certs\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.521105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c10722d0-a029-4829-87c5-3f4340ea19ff-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" (UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.524835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5znk\" (UniqueName: \"kubernetes.io/projected/69815ead-05b6-4300-b463-b8781a92335c-kube-api-access-r5znk\") pod \"frr-k8s-mkfvm\" (UID: \"69815ead-05b6-4300-b463-b8781a92335c\") " pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc 
kubenswrapper[4832]: I0131 04:56:59.525311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mjs\" (UniqueName: \"kubernetes.io/projected/c10722d0-a029-4829-87c5-3f4340ea19ff-kube-api-access-77mjs\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd6p6\" (UID: \"c10722d0-a029-4829-87c5-3f4340ea19ff\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.600935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metallb-excludel2\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4298\" (UniqueName: \"kubernetes.io/projected/b8624b4c-df9a-43b3-8f8c-99b9290a7956-kube-api-access-d4298\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vfk\" (UniqueName: \"kubernetes.io/projected/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-kube-api-access-r7vfk\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601137 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-metrics-certs\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-cert\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metrics-certs\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: E0131 04:56:59.601254 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:56:59 crc kubenswrapper[4832]: E0131 04:56:59.601385 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist podName:b8624b4c-df9a-43b3-8f8c-99b9290a7956 nodeName:}" failed. No retries permitted until 2026-01-31 04:57:00.101362445 +0000 UTC m=+829.050184120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist") pod "speaker-cbkcm" (UID: "b8624b4c-df9a-43b3-8f8c-99b9290a7956") : secret "metallb-memberlist" not found Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.601780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metallb-excludel2\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.606831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-cert\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.607070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-metrics-certs\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.607946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-metrics-certs\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.615794 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.625898 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4298\" (UniqueName: \"kubernetes.io/projected/b8624b4c-df9a-43b3-8f8c-99b9290a7956-kube-api-access-d4298\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.629267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vfk\" (UniqueName: \"kubernetes.io/projected/4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5-kube-api-access-r7vfk\") pod \"controller-6968d8fdc4-d4lls\" (UID: \"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5\") " pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.631933 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.721163 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.893965 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6"] Jan 31 04:56:59 crc kubenswrapper[4832]: I0131 04:56:59.984477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-d4lls"] Jan 31 04:56:59 crc kubenswrapper[4832]: W0131 04:56:59.989134 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fdf3988_1bb7_4cd7_83fb_b656b3ea1ec5.slice/crio-37bbdd3130ad1b6bf1920030f3c92b15c496e04382fb40127706d0c9811c707a WatchSource:0}: Error finding container 37bbdd3130ad1b6bf1920030f3c92b15c496e04382fb40127706d0c9811c707a: Status 404 returned error can't find the container with id 37bbdd3130ad1b6bf1920030f3c92b15c496e04382fb40127706d0c9811c707a Jan 31 04:57:00 crc kubenswrapper[4832]: I0131 04:57:00.113086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:57:00 crc kubenswrapper[4832]: E0131 04:57:00.113230 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:57:00 crc kubenswrapper[4832]: E0131 04:57:00.113286 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist podName:b8624b4c-df9a-43b3-8f8c-99b9290a7956 nodeName:}" failed. No retries permitted until 2026-01-31 04:57:01.113271783 +0000 UTC m=+830.062093458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist") pod "speaker-cbkcm" (UID: "b8624b4c-df9a-43b3-8f8c-99b9290a7956") : secret "metallb-memberlist" not found Jan 31 04:57:00 crc kubenswrapper[4832]: I0131 04:57:00.178928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d4lls" event={"ID":"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5","Type":"ContainerStarted","Data":"173e222b1157df3d5f1fc89c9d7ffbeafd6b5eb6c3d244e4a6683566a865318e"} Jan 31 04:57:00 crc kubenswrapper[4832]: I0131 04:57:00.178997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d4lls" event={"ID":"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5","Type":"ContainerStarted","Data":"37bbdd3130ad1b6bf1920030f3c92b15c496e04382fb40127706d0c9811c707a"} Jan 31 04:57:00 crc kubenswrapper[4832]: I0131 04:57:00.180758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"c371499f9bff37c23e8c2365f482471db7ce59e77cb79017c0fd1433a00bd1e2"} Jan 31 04:57:00 crc kubenswrapper[4832]: I0131 04:57:00.183328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" event={"ID":"c10722d0-a029-4829-87c5-3f4340ea19ff","Type":"ContainerStarted","Data":"164bbcb7d2cb94ef63c29d62c92b40e7689ca1feebec1b3088b56782c3b4c690"} Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.127683 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.144354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b8624b4c-df9a-43b3-8f8c-99b9290a7956-memberlist\") pod \"speaker-cbkcm\" (UID: \"b8624b4c-df9a-43b3-8f8c-99b9290a7956\") " pod="metallb-system/speaker-cbkcm" Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.197043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-d4lls" event={"ID":"4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5","Type":"ContainerStarted","Data":"35be24102c20aea7bf6c386e580e562fa72237e198d47877382084f8dd1c76af"} Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.198260 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cbkcm" Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.198892 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:57:01 crc kubenswrapper[4832]: I0131 04:57:01.232929 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-d4lls" podStartSLOduration=2.232901134 podStartE2EDuration="2.232901134s" podCreationTimestamp="2026-01-31 04:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:57:01.229772836 +0000 UTC m=+830.178594521" watchObservedRunningTime="2026-01-31 04:57:01.232901134 +0000 UTC m=+830.181722819" Jan 31 04:57:02 crc kubenswrapper[4832]: I0131 04:57:02.207843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cbkcm" event={"ID":"b8624b4c-df9a-43b3-8f8c-99b9290a7956","Type":"ContainerStarted","Data":"16f3a5837598a2d3f37a8c1f8d884444b66c4537db9f67d0dd3beab741492f22"} Jan 31 04:57:02 crc kubenswrapper[4832]: I0131 04:57:02.208191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cbkcm" 
event={"ID":"b8624b4c-df9a-43b3-8f8c-99b9290a7956","Type":"ContainerStarted","Data":"983b4ba9247d9ff6be4f181bb17a95282803b2bc17e0ea347bf726b3bd02c8b0"} Jan 31 04:57:02 crc kubenswrapper[4832]: I0131 04:57:02.208208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cbkcm" event={"ID":"b8624b4c-df9a-43b3-8f8c-99b9290a7956","Type":"ContainerStarted","Data":"82bd7b09ad9ceeb1717e20a3070a5497983be51933c7c2cb2c35d4876ea1c684"} Jan 31 04:57:02 crc kubenswrapper[4832]: I0131 04:57:02.208934 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cbkcm" Jan 31 04:57:02 crc kubenswrapper[4832]: I0131 04:57:02.231114 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cbkcm" podStartSLOduration=3.231086383 podStartE2EDuration="3.231086383s" podCreationTimestamp="2026-01-31 04:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:57:02.226646635 +0000 UTC m=+831.175468340" watchObservedRunningTime="2026-01-31 04:57:02.231086383 +0000 UTC m=+831.179908068" Jan 31 04:57:08 crc kubenswrapper[4832]: I0131 04:57:08.278159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" event={"ID":"c10722d0-a029-4829-87c5-3f4340ea19ff","Type":"ContainerStarted","Data":"de7e4682bc5e69f761863cebc7ebd7821e42f38e290a429084349826e228d4de"} Jan 31 04:57:08 crc kubenswrapper[4832]: I0131 04:57:08.279007 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:57:08 crc kubenswrapper[4832]: I0131 04:57:08.280954 4832 generic.go:334] "Generic (PLEG): container finished" podID="69815ead-05b6-4300-b463-b8781a92335c" containerID="e83514e02bc64609f86a86e6d45a342174aedd301e75bd4886a700301823381b" exitCode=0 Jan 31 04:57:08 crc 
kubenswrapper[4832]: I0131 04:57:08.281004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerDied","Data":"e83514e02bc64609f86a86e6d45a342174aedd301e75bd4886a700301823381b"} Jan 31 04:57:08 crc kubenswrapper[4832]: I0131 04:57:08.344153 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" podStartSLOduration=1.833562341 podStartE2EDuration="9.344124308s" podCreationTimestamp="2026-01-31 04:56:59 +0000 UTC" firstStartedPulling="2026-01-31 04:56:59.927443491 +0000 UTC m=+828.876265186" lastFinishedPulling="2026-01-31 04:57:07.438005428 +0000 UTC m=+836.386827153" observedRunningTime="2026-01-31 04:57:08.308331937 +0000 UTC m=+837.257153662" watchObservedRunningTime="2026-01-31 04:57:08.344124308 +0000 UTC m=+837.292946033" Jan 31 04:57:09 crc kubenswrapper[4832]: I0131 04:57:09.294058 4832 generic.go:334] "Generic (PLEG): container finished" podID="69815ead-05b6-4300-b463-b8781a92335c" containerID="73b7bcfd73e09c18d6ab044015fa4eb1d715e021cb76c7e03b07ecb39dc8189c" exitCode=0 Jan 31 04:57:09 crc kubenswrapper[4832]: I0131 04:57:09.294152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerDied","Data":"73b7bcfd73e09c18d6ab044015fa4eb1d715e021cb76c7e03b07ecb39dc8189c"} Jan 31 04:57:10 crc kubenswrapper[4832]: I0131 04:57:10.305032 4832 generic.go:334] "Generic (PLEG): container finished" podID="69815ead-05b6-4300-b463-b8781a92335c" containerID="556039f1c8d01c22b234f90ef4477aad5ae14fcbe637217846eceaaa30bf2841" exitCode=0 Jan 31 04:57:10 crc kubenswrapper[4832]: I0131 04:57:10.305080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" 
event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerDied","Data":"556039f1c8d01c22b234f90ef4477aad5ae14fcbe637217846eceaaa30bf2841"} Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.207136 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cbkcm" Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.343399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"46d9bd70190c7838df8e984ce14a5d227c71a18c96a8c31f5611347f1c05446a"} Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.343443 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"569adb0dcbb3fd3bf1a37524dad5aa1643da58f58de0104bcad8f0828021b6da"} Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.343455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"124c8d8fb628cf6c56c5dce0ab2d11769eaf3d91eb62aadc2e124c1b01cc9449"} Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.343464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"d38bd736a1dda4ad258a77d747b1b6afe153bc2db52810ada0d538fbe9d31814"} Jan 31 04:57:11 crc kubenswrapper[4832]: I0131 04:57:11.343475 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"4a1d5db88be50d55f9c93944ad019d95f5dfad34ec7630a4267040b6251ef370"} Jan 31 04:57:12 crc kubenswrapper[4832]: I0131 04:57:12.359787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mkfvm" 
event={"ID":"69815ead-05b6-4300-b463-b8781a92335c","Type":"ContainerStarted","Data":"84eea438fc3948591ec0d07d8085d94b88129692ff28fc3716fe1fcc21efd0a4"} Jan 31 04:57:12 crc kubenswrapper[4832]: I0131 04:57:12.360091 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:57:12 crc kubenswrapper[4832]: I0131 04:57:12.394351 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mkfvm" podStartSLOduration=5.763062875 podStartE2EDuration="13.39431983s" podCreationTimestamp="2026-01-31 04:56:59 +0000 UTC" firstStartedPulling="2026-01-31 04:56:59.781139518 +0000 UTC m=+828.729961203" lastFinishedPulling="2026-01-31 04:57:07.412396473 +0000 UTC m=+836.361218158" observedRunningTime="2026-01-31 04:57:12.392405371 +0000 UTC m=+841.341227096" watchObservedRunningTime="2026-01-31 04:57:12.39431983 +0000 UTC m=+841.343141525" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.078163 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.079298 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.081695 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.081695 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.087742 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.094297 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tf6zj" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.167228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kngf\" (UniqueName: \"kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf\") pod \"openstack-operator-index-hvvrr\" (UID: \"f2a04142-e1cd-41a2-9eaa-e0362511d7a0\") " pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.269500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kngf\" (UniqueName: \"kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf\") pod \"openstack-operator-index-hvvrr\" (UID: \"f2a04142-e1cd-41a2-9eaa-e0362511d7a0\") " pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.289855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kngf\" (UniqueName: \"kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf\") pod \"openstack-operator-index-hvvrr\" (UID: 
\"f2a04142-e1cd-41a2-9eaa-e0362511d7a0\") " pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.395647 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.616011 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.636131 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:14 crc kubenswrapper[4832]: W0131 04:57:14.643442 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a04142_e1cd_41a2_9eaa_e0362511d7a0.slice/crio-5b2ad85247f68b6a6686a6ff62b853b2dc959a9cff887e810c6d89cf36d66832 WatchSource:0}: Error finding container 5b2ad85247f68b6a6686a6ff62b853b2dc959a9cff887e810c6d89cf36d66832: Status 404 returned error can't find the container with id 5b2ad85247f68b6a6686a6ff62b853b2dc959a9cff887e810c6d89cf36d66832 Jan 31 04:57:14 crc kubenswrapper[4832]: I0131 04:57:14.669238 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:57:15 crc kubenswrapper[4832]: I0131 04:57:15.379794 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hvvrr" event={"ID":"f2a04142-e1cd-41a2-9eaa-e0362511d7a0","Type":"ContainerStarted","Data":"5b2ad85247f68b6a6686a6ff62b853b2dc959a9cff887e810c6d89cf36d66832"} Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.178247 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.403052 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-hvvrr" event={"ID":"f2a04142-e1cd-41a2-9eaa-e0362511d7a0","Type":"ContainerStarted","Data":"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b"} Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.424727 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hvvrr" podStartSLOduration=1.768546757 podStartE2EDuration="4.424701144s" podCreationTimestamp="2026-01-31 04:57:14 +0000 UTC" firstStartedPulling="2026-01-31 04:57:14.646850014 +0000 UTC m=+843.595671699" lastFinishedPulling="2026-01-31 04:57:17.303004361 +0000 UTC m=+846.251826086" observedRunningTime="2026-01-31 04:57:18.419985557 +0000 UTC m=+847.368807282" watchObservedRunningTime="2026-01-31 04:57:18.424701144 +0000 UTC m=+847.373522839" Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.784630 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sxhvd"] Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.785658 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.858383 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sxhvd"] Jan 31 04:57:18 crc kubenswrapper[4832]: I0131 04:57:18.948759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq748\" (UniqueName: \"kubernetes.io/projected/09997099-ea53-4947-b5ec-eaed51db7a12-kube-api-access-cq748\") pod \"openstack-operator-index-sxhvd\" (UID: \"09997099-ea53-4947-b5ec-eaed51db7a12\") " pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.050845 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq748\" (UniqueName: \"kubernetes.io/projected/09997099-ea53-4947-b5ec-eaed51db7a12-kube-api-access-cq748\") pod \"openstack-operator-index-sxhvd\" (UID: \"09997099-ea53-4947-b5ec-eaed51db7a12\") " pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.075745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq748\" (UniqueName: \"kubernetes.io/projected/09997099-ea53-4947-b5ec-eaed51db7a12-kube-api-access-cq748\") pod \"openstack-operator-index-sxhvd\" (UID: \"09997099-ea53-4947-b5ec-eaed51db7a12\") " pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.137348 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.409941 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hvvrr" podUID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" containerName="registry-server" containerID="cri-o://b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b" gracePeriod=2 Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.642303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sxhvd"] Jan 31 04:57:19 crc kubenswrapper[4832]: W0131 04:57:19.644960 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09997099_ea53_4947_b5ec_eaed51db7a12.slice/crio-daf59608c2a1da167522aad2e400beab10a9e555e43f88c943b899919f5d6987 WatchSource:0}: Error finding container daf59608c2a1da167522aad2e400beab10a9e555e43f88c943b899919f5d6987: Status 404 returned error can't find the container with id daf59608c2a1da167522aad2e400beab10a9e555e43f88c943b899919f5d6987 Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.645226 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd6p6" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.741081 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-d4lls" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.802790 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.870527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kngf\" (UniqueName: \"kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf\") pod \"f2a04142-e1cd-41a2-9eaa-e0362511d7a0\" (UID: \"f2a04142-e1cd-41a2-9eaa-e0362511d7a0\") " Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.879143 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf" (OuterVolumeSpecName: "kube-api-access-4kngf") pod "f2a04142-e1cd-41a2-9eaa-e0362511d7a0" (UID: "f2a04142-e1cd-41a2-9eaa-e0362511d7a0"). InnerVolumeSpecName "kube-api-access-4kngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:57:19 crc kubenswrapper[4832]: I0131 04:57:19.975903 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kngf\" (UniqueName: \"kubernetes.io/projected/f2a04142-e1cd-41a2-9eaa-e0362511d7a0-kube-api-access-4kngf\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.420301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sxhvd" event={"ID":"09997099-ea53-4947-b5ec-eaed51db7a12","Type":"ContainerStarted","Data":"4baa1e4bb5c544c2db94bac66f39c89a9bb640f4551f0023f81b0bcd96598713"} Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.420379 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sxhvd" event={"ID":"09997099-ea53-4947-b5ec-eaed51db7a12","Type":"ContainerStarted","Data":"daf59608c2a1da167522aad2e400beab10a9e555e43f88c943b899919f5d6987"} Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.424603 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" containerID="b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b" exitCode=0 Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.424678 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hvvrr" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.424680 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hvvrr" event={"ID":"f2a04142-e1cd-41a2-9eaa-e0362511d7a0","Type":"ContainerDied","Data":"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b"} Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.424840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hvvrr" event={"ID":"f2a04142-e1cd-41a2-9eaa-e0362511d7a0","Type":"ContainerDied","Data":"5b2ad85247f68b6a6686a6ff62b853b2dc959a9cff887e810c6d89cf36d66832"} Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.424870 4832 scope.go:117] "RemoveContainer" containerID="b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.449226 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sxhvd" podStartSLOduration=2.386114157 podStartE2EDuration="2.449199898s" podCreationTimestamp="2026-01-31 04:57:18 +0000 UTC" firstStartedPulling="2026-01-31 04:57:19.653218395 +0000 UTC m=+848.602040080" lastFinishedPulling="2026-01-31 04:57:19.716304126 +0000 UTC m=+848.665125821" observedRunningTime="2026-01-31 04:57:20.440647412 +0000 UTC m=+849.389469097" watchObservedRunningTime="2026-01-31 04:57:20.449199898 +0000 UTC m=+849.398021593" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.460637 4832 scope.go:117] "RemoveContainer" containerID="b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b" Jan 31 04:57:20 crc 
kubenswrapper[4832]: E0131 04:57:20.465866 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b\": container with ID starting with b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b not found: ID does not exist" containerID="b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.466088 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b"} err="failed to get container status \"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b\": rpc error: code = NotFound desc = could not find container \"b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b\": container with ID starting with b1f6ad1cecca8b7b445ad3f3806ff9f18b15ca390ff8ed44d33768e95c05856b not found: ID does not exist" Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.483526 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:20 crc kubenswrapper[4832]: I0131 04:57:20.494049 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hvvrr"] Jan 31 04:57:21 crc kubenswrapper[4832]: I0131 04:57:21.865857 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" path="/var/lib/kubelet/pods/f2a04142-e1cd-41a2-9eaa-e0362511d7a0/volumes" Jan 31 04:57:29 crc kubenswrapper[4832]: I0131 04:57:29.138133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:29 crc kubenswrapper[4832]: I0131 04:57:29.139221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:29 crc kubenswrapper[4832]: I0131 04:57:29.178359 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:29 crc kubenswrapper[4832]: I0131 04:57:29.529141 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sxhvd" Jan 31 04:57:29 crc kubenswrapper[4832]: I0131 04:57:29.621530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mkfvm" Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.873138 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm"] Jan 31 04:57:35 crc kubenswrapper[4832]: E0131 04:57:35.874149 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" containerName="registry-server" Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.874171 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" containerName="registry-server" Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.874385 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a04142-e1cd-41a2-9eaa-e0362511d7a0" containerName="registry-server" Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.876048 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.877474 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm"] Jan 31 04:57:35 crc kubenswrapper[4832]: I0131 04:57:35.878453 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rv2qs" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.047080 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp52s\" (UniqueName: \"kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.047139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.047221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 
04:57:36.149300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp52s\" (UniqueName: \"kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.149413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.149518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.150794 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.150796 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.179773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp52s\" (UniqueName: \"kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s\") pod \"fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.199932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:36 crc kubenswrapper[4832]: I0131 04:57:36.716378 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm"] Jan 31 04:57:36 crc kubenswrapper[4832]: W0131 04:57:36.725715 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ddd6955_162a_4923_b841_4eb20989be7f.slice/crio-4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b WatchSource:0}: Error finding container 4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b: Status 404 returned error can't find the container with id 4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b Jan 31 04:57:37 crc kubenswrapper[4832]: I0131 04:57:37.554676 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ddd6955-162a-4923-b841-4eb20989be7f" containerID="181fa7f7ebecf8d71c5f1d2693ed6af88d0a8980eef9b2b5b7d4ef062ce1f664" exitCode=0 Jan 31 
04:57:37 crc kubenswrapper[4832]: I0131 04:57:37.554741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" event={"ID":"9ddd6955-162a-4923-b841-4eb20989be7f","Type":"ContainerDied","Data":"181fa7f7ebecf8d71c5f1d2693ed6af88d0a8980eef9b2b5b7d4ef062ce1f664"} Jan 31 04:57:37 crc kubenswrapper[4832]: I0131 04:57:37.555172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" event={"ID":"9ddd6955-162a-4923-b841-4eb20989be7f","Type":"ContainerStarted","Data":"4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b"} Jan 31 04:57:39 crc kubenswrapper[4832]: I0131 04:57:39.581515 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ddd6955-162a-4923-b841-4eb20989be7f" containerID="a5800bafa6a88dfe3eb6fae6d85ad54209c24e74e91587888495f1542cd48efa" exitCode=0 Jan 31 04:57:39 crc kubenswrapper[4832]: I0131 04:57:39.581604 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" event={"ID":"9ddd6955-162a-4923-b841-4eb20989be7f","Type":"ContainerDied","Data":"a5800bafa6a88dfe3eb6fae6d85ad54209c24e74e91587888495f1542cd48efa"} Jan 31 04:57:40 crc kubenswrapper[4832]: I0131 04:57:40.593784 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ddd6955-162a-4923-b841-4eb20989be7f" containerID="80d01c4a38b9a5d4b3403c59a73f51935ad919163c4c47c72b4de1ed678ef079" exitCode=0 Jan 31 04:57:40 crc kubenswrapper[4832]: I0131 04:57:40.593837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" event={"ID":"9ddd6955-162a-4923-b841-4eb20989be7f","Type":"ContainerDied","Data":"80d01c4a38b9a5d4b3403c59a73f51935ad919163c4c47c72b4de1ed678ef079"} Jan 31 04:57:41 crc kubenswrapper[4832]: I0131 04:57:41.903587 
4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.040275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp52s\" (UniqueName: \"kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s\") pod \"9ddd6955-162a-4923-b841-4eb20989be7f\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.040373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util\") pod \"9ddd6955-162a-4923-b841-4eb20989be7f\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.040458 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle\") pod \"9ddd6955-162a-4923-b841-4eb20989be7f\" (UID: \"9ddd6955-162a-4923-b841-4eb20989be7f\") " Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.042051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle" (OuterVolumeSpecName: "bundle") pod "9ddd6955-162a-4923-b841-4eb20989be7f" (UID: "9ddd6955-162a-4923-b841-4eb20989be7f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.051836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s" (OuterVolumeSpecName: "kube-api-access-jp52s") pod "9ddd6955-162a-4923-b841-4eb20989be7f" (UID: "9ddd6955-162a-4923-b841-4eb20989be7f"). 
InnerVolumeSpecName "kube-api-access-jp52s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.056263 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util" (OuterVolumeSpecName: "util") pod "9ddd6955-162a-4923-b841-4eb20989be7f" (UID: "9ddd6955-162a-4923-b841-4eb20989be7f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.142651 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp52s\" (UniqueName: \"kubernetes.io/projected/9ddd6955-162a-4923-b841-4eb20989be7f-kube-api-access-jp52s\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.142707 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.142729 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ddd6955-162a-4923-b841-4eb20989be7f-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.613771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" event={"ID":"9ddd6955-162a-4923-b841-4eb20989be7f","Type":"ContainerDied","Data":"4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b"} Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.613846 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6637a9500c98493ccb5406d3f480fe3a737af4527c5dfe0e127034b543725b" Jan 31 04:57:42 crc kubenswrapper[4832]: I0131 04:57:42.613884 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.025556 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j"] Jan 31 04:57:48 crc kubenswrapper[4832]: E0131 04:57:48.026514 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="pull" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.026535 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="pull" Jan 31 04:57:48 crc kubenswrapper[4832]: E0131 04:57:48.026595 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="util" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.026607 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="util" Jan 31 04:57:48 crc kubenswrapper[4832]: E0131 04:57:48.026620 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="extract" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.026632 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="extract" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.026840 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddd6955-162a-4923-b841-4eb20989be7f" containerName="extract" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.027499 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.038998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cpmpk" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.054600 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j"] Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.138075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xrd\" (UniqueName: \"kubernetes.io/projected/f76fb23f-871d-459c-b196-8e33703f7e44-kube-api-access-79xrd\") pod \"openstack-operator-controller-init-6cbc497cdb-zc65j\" (UID: \"f76fb23f-871d-459c-b196-8e33703f7e44\") " pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.239593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xrd\" (UniqueName: \"kubernetes.io/projected/f76fb23f-871d-459c-b196-8e33703f7e44-kube-api-access-79xrd\") pod \"openstack-operator-controller-init-6cbc497cdb-zc65j\" (UID: \"f76fb23f-871d-459c-b196-8e33703f7e44\") " pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.258911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xrd\" (UniqueName: \"kubernetes.io/projected/f76fb23f-871d-459c-b196-8e33703f7e44-kube-api-access-79xrd\") pod \"openstack-operator-controller-init-6cbc497cdb-zc65j\" (UID: \"f76fb23f-871d-459c-b196-8e33703f7e44\") " pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.347417 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.540095 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.540387 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:57:48 crc kubenswrapper[4832]: I0131 04:57:48.841712 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j"] Jan 31 04:57:49 crc kubenswrapper[4832]: I0131 04:57:49.677875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" event={"ID":"f76fb23f-871d-459c-b196-8e33703f7e44","Type":"ContainerStarted","Data":"af72813c696d032442466ed4a017e8834f559a083ad27f4aa5552e69fcb15599"} Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.265365 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.267768 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.273106 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.393125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.393238 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnxd\" (UniqueName: \"kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.393285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.499519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.499689 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lqnxd\" (UniqueName: \"kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.499733 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.500150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.500222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.526166 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnxd\" (UniqueName: \"kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd\") pod \"community-operators-q62g2\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:51 crc kubenswrapper[4832]: I0131 04:57:51.607910 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.542209 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.720393 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerStarted","Data":"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf"} Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.720468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerStarted","Data":"e65e43b9237735de216135c0b20e50599e0b3af08811cb402bd0286accca25b4"} Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.722646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" event={"ID":"f76fb23f-871d-459c-b196-8e33703f7e44","Type":"ContainerStarted","Data":"37c49946fa7b65b13fac50e4b0c049c9fe36df660509cf2e3ae6116fdfca6d56"} Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.723257 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:57:53 crc kubenswrapper[4832]: I0131 04:57:53.779509 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" podStartSLOduration=2.258785768 podStartE2EDuration="6.779490503s" podCreationTimestamp="2026-01-31 04:57:47 +0000 UTC" firstStartedPulling="2026-01-31 04:57:48.862230696 +0000 UTC m=+877.811052381" lastFinishedPulling="2026-01-31 04:57:53.382935431 +0000 UTC m=+882.331757116" observedRunningTime="2026-01-31 
04:57:53.777814031 +0000 UTC m=+882.726635716" watchObservedRunningTime="2026-01-31 04:57:53.779490503 +0000 UTC m=+882.728312188" Jan 31 04:57:54 crc kubenswrapper[4832]: I0131 04:57:54.731825 4832 generic.go:334] "Generic (PLEG): container finished" podID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerID="6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf" exitCode=0 Jan 31 04:57:54 crc kubenswrapper[4832]: I0131 04:57:54.731908 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerDied","Data":"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf"} Jan 31 04:57:55 crc kubenswrapper[4832]: I0131 04:57:55.742408 4832 generic.go:334] "Generic (PLEG): container finished" podID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerID="6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a" exitCode=0 Jan 31 04:57:55 crc kubenswrapper[4832]: I0131 04:57:55.742469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerDied","Data":"6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a"} Jan 31 04:57:56 crc kubenswrapper[4832]: I0131 04:57:56.755940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerStarted","Data":"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f"} Jan 31 04:57:56 crc kubenswrapper[4832]: I0131 04:57:56.782078 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q62g2" podStartSLOduration=4.408530759 podStartE2EDuration="5.782042327s" podCreationTimestamp="2026-01-31 04:57:51 +0000 UTC" firstStartedPulling="2026-01-31 04:57:54.734396013 +0000 UTC m=+883.683217728" 
lastFinishedPulling="2026-01-31 04:57:56.107907611 +0000 UTC m=+885.056729296" observedRunningTime="2026-01-31 04:57:56.779005873 +0000 UTC m=+885.727827558" watchObservedRunningTime="2026-01-31 04:57:56.782042327 +0000 UTC m=+885.730864022" Jan 31 04:57:58 crc kubenswrapper[4832]: I0131 04:57:58.350025 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6cbc497cdb-zc65j" Jan 31 04:58:01 crc kubenswrapper[4832]: I0131 04:58:01.608493 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:01 crc kubenswrapper[4832]: I0131 04:58:01.608947 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:01 crc kubenswrapper[4832]: I0131 04:58:01.665348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:01 crc kubenswrapper[4832]: I0131 04:58:01.838923 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:01 crc kubenswrapper[4832]: I0131 04:58:01.913018 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:58:03 crc kubenswrapper[4832]: I0131 04:58:03.799894 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q62g2" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="registry-server" containerID="cri-o://6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f" gracePeriod=2 Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.192300 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.296657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqnxd\" (UniqueName: \"kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd\") pod \"961f4e37-688d-4cb5-a9f7-181d73151d8c\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.296856 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities\") pod \"961f4e37-688d-4cb5-a9f7-181d73151d8c\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.296906 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content\") pod \"961f4e37-688d-4cb5-a9f7-181d73151d8c\" (UID: \"961f4e37-688d-4cb5-a9f7-181d73151d8c\") " Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.298028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities" (OuterVolumeSpecName: "utilities") pod "961f4e37-688d-4cb5-a9f7-181d73151d8c" (UID: "961f4e37-688d-4cb5-a9f7-181d73151d8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.303110 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd" (OuterVolumeSpecName: "kube-api-access-lqnxd") pod "961f4e37-688d-4cb5-a9f7-181d73151d8c" (UID: "961f4e37-688d-4cb5-a9f7-181d73151d8c"). InnerVolumeSpecName "kube-api-access-lqnxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.365692 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "961f4e37-688d-4cb5-a9f7-181d73151d8c" (UID: "961f4e37-688d-4cb5-a9f7-181d73151d8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.398604 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.398638 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/961f4e37-688d-4cb5-a9f7-181d73151d8c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.398651 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqnxd\" (UniqueName: \"kubernetes.io/projected/961f4e37-688d-4cb5-a9f7-181d73151d8c-kube-api-access-lqnxd\") on node \"crc\" DevicePath \"\"" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.810196 4832 generic.go:334] "Generic (PLEG): container finished" podID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerID="6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f" exitCode=0 Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.810279 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q62g2" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.810301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerDied","Data":"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f"} Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.810643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q62g2" event={"ID":"961f4e37-688d-4cb5-a9f7-181d73151d8c","Type":"ContainerDied","Data":"e65e43b9237735de216135c0b20e50599e0b3af08811cb402bd0286accca25b4"} Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.810672 4832 scope.go:117] "RemoveContainer" containerID="6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.834357 4832 scope.go:117] "RemoveContainer" containerID="6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.853591 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.862670 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q62g2"] Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.870406 4832 scope.go:117] "RemoveContainer" containerID="6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.887456 4832 scope.go:117] "RemoveContainer" containerID="6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f" Jan 31 04:58:04 crc kubenswrapper[4832]: E0131 04:58:04.887965 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f\": container with ID starting with 6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f not found: ID does not exist" containerID="6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.888005 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f"} err="failed to get container status \"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f\": rpc error: code = NotFound desc = could not find container \"6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f\": container with ID starting with 6c521313756786c03189803a5f30166684d0c22f54b595adb350b9a89522f58f not found: ID does not exist" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.888030 4832 scope.go:117] "RemoveContainer" containerID="6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a" Jan 31 04:58:04 crc kubenswrapper[4832]: E0131 04:58:04.888438 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a\": container with ID starting with 6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a not found: ID does not exist" containerID="6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.888481 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a"} err="failed to get container status \"6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a\": rpc error: code = NotFound desc = could not find container \"6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a\": container with ID 
starting with 6f7711aaef2f6ff2047e548f6ad8308db181f8daea479b3cce60e8f7fd4f852a not found: ID does not exist" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.888507 4832 scope.go:117] "RemoveContainer" containerID="6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf" Jan 31 04:58:04 crc kubenswrapper[4832]: E0131 04:58:04.889057 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf\": container with ID starting with 6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf not found: ID does not exist" containerID="6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf" Jan 31 04:58:04 crc kubenswrapper[4832]: I0131 04:58:04.889097 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf"} err="failed to get container status \"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf\": rpc error: code = NotFound desc = could not find container \"6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf\": container with ID starting with 6a8bc7407b3b1eec066d7484285b3449db6b1c9ad30b0ec9c88e6015b4ece1cf not found: ID does not exist" Jan 31 04:58:05 crc kubenswrapper[4832]: I0131 04:58:05.874970 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" path="/var/lib/kubelet/pods/961f4e37-688d-4cb5-a9f7-181d73151d8c/volumes" Jan 31 04:58:18 crc kubenswrapper[4832]: I0131 04:58:18.540924 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:58:18 crc kubenswrapper[4832]: I0131 
04:58:18.541684 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.155184 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4"] Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.156217 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="registry-server" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.156237 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="registry-server" Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.156267 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="extract-utilities" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.156279 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="extract-utilities" Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.156314 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="extract-content" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.156327 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="extract-content" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.156510 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="961f4e37-688d-4cb5-a9f7-181d73151d8c" containerName="registry-server" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 
04:58:36.157114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.160241 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xmspn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.173005 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.174146 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.175778 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5xsjm" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.178864 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.190490 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.191534 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.199846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fmt\" (UniqueName: \"kubernetes.io/projected/0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d-kube-api-access-v2fmt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zr7l4\" (UID: \"0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.199940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvvz\" (UniqueName: \"kubernetes.io/projected/c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b-kube-api-access-cxvvz\") pod \"cinder-operator-controller-manager-8d874c8fc-5hl82\" (UID: \"c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.200606 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tzfg7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.202706 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.250674 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.294089 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.301663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cxvvz\" (UniqueName: \"kubernetes.io/projected/c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b-kube-api-access-cxvvz\") pod \"cinder-operator-controller-manager-8d874c8fc-5hl82\" (UID: \"c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.301807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kx6\" (UniqueName: \"kubernetes.io/projected/805b9f0e-cb57-4b71-b199-b8ee289af169-kube-api-access-c5kx6\") pod \"designate-operator-controller-manager-6d9697b7f4-gc6zx\" (UID: \"805b9f0e-cb57-4b71-b199-b8ee289af169\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.301866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2fmt\" (UniqueName: \"kubernetes.io/projected/0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d-kube-api-access-v2fmt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zr7l4\" (UID: \"0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.315049 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.324509 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4npfn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.335733 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvvz\" (UniqueName: \"kubernetes.io/projected/c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b-kube-api-access-cxvvz\") pod \"cinder-operator-controller-manager-8d874c8fc-5hl82\" (UID: \"c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.338036 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.345600 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.346698 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.348413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2fmt\" (UniqueName: \"kubernetes.io/projected/0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d-kube-api-access-v2fmt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zr7l4\" (UID: \"0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.349666 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g6gc2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.368271 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.373642 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.375004 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.377281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mvcqr" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.392260 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.406739 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.413047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkzw\" (UniqueName: \"kubernetes.io/projected/b23fff55-653f-417a-9f77-d7b115586ade-kube-api-access-hrkzw\") pod \"horizon-operator-controller-manager-5fb775575f-4m8x2\" (UID: \"b23fff55-653f-417a-9f77-d7b115586ade\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.413114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kx6\" (UniqueName: \"kubernetes.io/projected/805b9f0e-cb57-4b71-b199-b8ee289af169-kube-api-access-c5kx6\") pod \"designate-operator-controller-manager-6d9697b7f4-gc6zx\" (UID: \"805b9f0e-cb57-4b71-b199-b8ee289af169\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.413145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kq9v\" (UniqueName: \"kubernetes.io/projected/9d690743-c300-46c9-83d7-c416ba5aff83-kube-api-access-6kq9v\") pod \"glance-operator-controller-manager-8886f4c47-pnktz\" 
(UID: \"9d690743-c300-46c9-83d7-c416ba5aff83\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.413202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r44s\" (UniqueName: \"kubernetes.io/projected/141c81b8-f2f6-4f96-9ac7-83305f4eabd0-kube-api-access-7r44s\") pod \"heat-operator-controller-manager-69d6db494d-2hr2t\" (UID: \"141c81b8-f2f6-4f96-9ac7-83305f4eabd0\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.426614 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.427356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.427978 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.430143 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.435403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m8hgj" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.437321 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.437628 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-v5nmv" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.448653 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.449865 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.454779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kx6\" (UniqueName: \"kubernetes.io/projected/805b9f0e-cb57-4b71-b199-b8ee289af169-kube-api-access-c5kx6\") pod \"designate-operator-controller-manager-6d9697b7f4-gc6zx\" (UID: \"805b9f0e-cb57-4b71-b199-b8ee289af169\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.459189 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5n4jd" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.463707 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.472956 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.479199 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.496375 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.497783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.503324 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.515085 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hcr7f" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516105 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrkzw\" (UniqueName: \"kubernetes.io/projected/b23fff55-653f-417a-9f77-d7b115586ade-kube-api-access-hrkzw\") pod \"horizon-operator-controller-manager-5fb775575f-4m8x2\" (UID: \"b23fff55-653f-417a-9f77-d7b115586ade\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7c8\" (UniqueName: \"kubernetes.io/projected/830a5967-5b56-4c70-8940-ef90cd945807-kube-api-access-4b7c8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-4445f\" (UID: \"830a5967-5b56-4c70-8940-ef90cd945807\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kq9v\" (UniqueName: \"kubernetes.io/projected/9d690743-c300-46c9-83d7-c416ba5aff83-kube-api-access-6kq9v\") pod \"glance-operator-controller-manager-8886f4c47-pnktz\" (UID: \"9d690743-c300-46c9-83d7-c416ba5aff83\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516696 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8h5\" (UniqueName: \"kubernetes.io/projected/0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd-kube-api-access-gk8h5\") pod \"keystone-operator-controller-manager-84f48565d4-jl4p7\" (UID: \"0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516754 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzft8\" (UniqueName: \"kubernetes.io/projected/ea8d7014-c1f0-4b4f-aa01-7865124c3187-kube-api-access-pzft8\") pod \"manila-operator-controller-manager-7dd968899f-jtr9c\" (UID: \"ea8d7014-c1f0-4b4f-aa01-7865124c3187\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcmd\" (UniqueName: \"kubernetes.io/projected/48118fb9-dcf4-45f5-8096-c558f980eab4-kube-api-access-2tcmd\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.516830 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r44s\" (UniqueName: 
\"kubernetes.io/projected/141c81b8-f2f6-4f96-9ac7-83305f4eabd0-kube-api-access-7r44s\") pod \"heat-operator-controller-manager-69d6db494d-2hr2t\" (UID: \"141c81b8-f2f6-4f96-9ac7-83305f4eabd0\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.542759 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.544657 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.555544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r44s\" (UniqueName: \"kubernetes.io/projected/141c81b8-f2f6-4f96-9ac7-83305f4eabd0-kube-api-access-7r44s\") pod \"heat-operator-controller-manager-69d6db494d-2hr2t\" (UID: \"141c81b8-f2f6-4f96-9ac7-83305f4eabd0\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.556112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kq9v\" (UniqueName: \"kubernetes.io/projected/9d690743-c300-46c9-83d7-c416ba5aff83-kube-api-access-6kq9v\") pod \"glance-operator-controller-manager-8886f4c47-pnktz\" (UID: \"9d690743-c300-46c9-83d7-c416ba5aff83\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.562020 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fddtx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.562265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrkzw\" (UniqueName: 
\"kubernetes.io/projected/b23fff55-653f-417a-9f77-d7b115586ade-kube-api-access-hrkzw\") pod \"horizon-operator-controller-manager-5fb775575f-4m8x2\" (UID: \"b23fff55-653f-417a-9f77-d7b115586ade\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.575783 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.612462 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.613389 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.619856 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kr6sc" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.620910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk8h5\" (UniqueName: \"kubernetes.io/projected/0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd-kube-api-access-gk8h5\") pod \"keystone-operator-controller-manager-84f48565d4-jl4p7\" (UID: \"0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.620945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc 
kubenswrapper[4832]: I0131 04:58:36.620968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzft8\" (UniqueName: \"kubernetes.io/projected/ea8d7014-c1f0-4b4f-aa01-7865124c3187-kube-api-access-pzft8\") pod \"manila-operator-controller-manager-7dd968899f-jtr9c\" (UID: \"ea8d7014-c1f0-4b4f-aa01-7865124c3187\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.620998 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcmd\" (UniqueName: \"kubernetes.io/projected/48118fb9-dcf4-45f5-8096-c558f980eab4-kube-api-access-2tcmd\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.621044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h77r\" (UniqueName: \"kubernetes.io/projected/7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138-kube-api-access-7h77r\") pod \"mariadb-operator-controller-manager-67bf948998-fdxks\" (UID: \"7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.621081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7c8\" (UniqueName: \"kubernetes.io/projected/830a5967-5b56-4c70-8940-ef90cd945807-kube-api-access-4b7c8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-4445f\" (UID: \"830a5967-5b56-4c70-8940-ef90cd945807\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.621460 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.621499 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert podName:48118fb9-dcf4-45f5-8096-c558f980eab4 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:37.121484046 +0000 UTC m=+926.070305731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert") pod "infra-operator-controller-manager-57997b5fcd-hjsbn" (UID: "48118fb9-dcf4-45f5-8096-c558f980eab4") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.634423 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.651363 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.651672 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcmd\" (UniqueName: \"kubernetes.io/projected/48118fb9-dcf4-45f5-8096-c558f980eab4-kube-api-access-2tcmd\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.652434 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.653447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7c8\" (UniqueName: \"kubernetes.io/projected/830a5967-5b56-4c70-8940-ef90cd945807-kube-api-access-4b7c8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-4445f\" (UID: \"830a5967-5b56-4c70-8940-ef90cd945807\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.662621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.674163 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.677882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk8h5\" (UniqueName: \"kubernetes.io/projected/0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd-kube-api-access-gk8h5\") pod \"keystone-operator-controller-manager-84f48565d4-jl4p7\" (UID: \"0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.678134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzft8\" (UniqueName: \"kubernetes.io/projected/ea8d7014-c1f0-4b4f-aa01-7865124c3187-kube-api-access-pzft8\") pod \"manila-operator-controller-manager-7dd968899f-jtr9c\" (UID: \"ea8d7014-c1f0-4b4f-aa01-7865124c3187\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.678438 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-l8245" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.686776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.692686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rq29s" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.699023 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.706846 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.709321 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.715694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.722807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4r9\" (UniqueName: \"kubernetes.io/projected/e7569d69-3ad6-4127-a49f-a16706a35099-kube-api-access-wf4r9\") pod \"neutron-operator-controller-manager-585dbc889-c9pxb\" (UID: \"e7569d69-3ad6-4127-a49f-a16706a35099\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.722860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h77r\" (UniqueName: \"kubernetes.io/projected/7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138-kube-api-access-7h77r\") pod \"mariadb-operator-controller-manager-67bf948998-fdxks\" (UID: \"7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.722886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx2lk\" (UniqueName: \"kubernetes.io/projected/d86d3d02-d07f-4bf0-a01a-18652faa5111-kube-api-access-sx2lk\") pod \"nova-operator-controller-manager-55bff696bd-jcrbt\" (UID: \"d86d3d02-d07f-4bf0-a01a-18652faa5111\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.722949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hzz\" (UniqueName: 
\"kubernetes.io/projected/049ad615-904c-4043-b395-dd242e743140-kube-api-access-q7hzz\") pod \"octavia-operator-controller-manager-6687f8d877-8xwhl\" (UID: \"049ad615-904c-4043-b395-dd242e743140\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.733029 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.733996 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.746121 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-54t2q" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.746365 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.747999 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.773238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h77r\" (UniqueName: \"kubernetes.io/projected/7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138-kube-api-access-7h77r\") pod \"mariadb-operator-controller-manager-67bf948998-fdxks\" (UID: \"7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.782405 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.789828 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.792147 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.796190 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gn94s" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.797851 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.799025 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.807404 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zwczt" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.807855 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.823942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824013 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dd7\" (UniqueName: \"kubernetes.io/projected/acc59b3c-877b-4a0e-a118-76a05d362ad5-kube-api-access-w9dd7\") pod \"placement-operator-controller-manager-5b964cf4cd-9ptdq\" (UID: \"acc59b3c-877b-4a0e-a118-76a05d362ad5\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqcb\" (UniqueName: \"kubernetes.io/projected/f59551da-68de-4704-98fd-d9355e69c5af-kube-api-access-6jqcb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824061 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4r9\" (UniqueName: \"kubernetes.io/projected/e7569d69-3ad6-4127-a49f-a16706a35099-kube-api-access-wf4r9\") pod \"neutron-operator-controller-manager-585dbc889-c9pxb\" (UID: \"e7569d69-3ad6-4127-a49f-a16706a35099\") " 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx2lk\" (UniqueName: \"kubernetes.io/projected/d86d3d02-d07f-4bf0-a01a-18652faa5111-kube-api-access-sx2lk\") pod \"nova-operator-controller-manager-55bff696bd-jcrbt\" (UID: \"d86d3d02-d07f-4bf0-a01a-18652faa5111\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hzz\" (UniqueName: \"kubernetes.io/projected/049ad615-904c-4043-b395-dd242e743140-kube-api-access-q7hzz\") pod \"octavia-operator-controller-manager-6687f8d877-8xwhl\" (UID: \"049ad615-904c-4043-b395-dd242e743140\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.824171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqht\" (UniqueName: \"kubernetes.io/projected/6d39a5f9-a0b0-4a9e-871b-30a307adfd3d-kube-api-access-hwqht\") pod \"ovn-operator-controller-manager-788c46999f-t6wm8\" (UID: \"6d39a5f9-a0b0-4a9e-871b-30a307adfd3d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.837822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.849900 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.850776 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.854637 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dgv6h" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.873255 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.903288 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.920475 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.921192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4"] Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.938535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dd7\" (UniqueName: \"kubernetes.io/projected/acc59b3c-877b-4a0e-a118-76a05d362ad5-kube-api-access-w9dd7\") pod \"placement-operator-controller-manager-5b964cf4cd-9ptdq\" (UID: \"acc59b3c-877b-4a0e-a118-76a05d362ad5\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.938661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqcb\" (UniqueName: \"kubernetes.io/projected/f59551da-68de-4704-98fd-d9355e69c5af-kube-api-access-6jqcb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.938837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqht\" (UniqueName: \"kubernetes.io/projected/6d39a5f9-a0b0-4a9e-871b-30a307adfd3d-kube-api-access-hwqht\") pod \"ovn-operator-controller-manager-788c46999f-t6wm8\" (UID: \"6d39a5f9-a0b0-4a9e-871b-30a307adfd3d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.938897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.938926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jmf\" (UniqueName: \"kubernetes.io/projected/aec1c2a0-52b1-4a2e-8986-1e12be79d67c-kube-api-access-q9jmf\") pod \"swift-operator-controller-manager-68fc8c869-m49v4\" (UID: \"aec1c2a0-52b1-4a2e-8986-1e12be79d67c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.941408 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:36 crc kubenswrapper[4832]: E0131 04:58:36.941591 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert podName:f59551da-68de-4704-98fd-d9355e69c5af nodeName:}" failed. 
No retries permitted until 2026-01-31 04:58:37.441552872 +0000 UTC m=+926.390374557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" (UID: "f59551da-68de-4704-98fd-d9355e69c5af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.953412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4r9\" (UniqueName: \"kubernetes.io/projected/e7569d69-3ad6-4127-a49f-a16706a35099-kube-api-access-wf4r9\") pod \"neutron-operator-controller-manager-585dbc889-c9pxb\" (UID: \"e7569d69-3ad6-4127-a49f-a16706a35099\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.960995 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hzz\" (UniqueName: \"kubernetes.io/projected/049ad615-904c-4043-b395-dd242e743140-kube-api-access-q7hzz\") pod \"octavia-operator-controller-manager-6687f8d877-8xwhl\" (UID: \"049ad615-904c-4043-b395-dd242e743140\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.961793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx2lk\" (UniqueName: \"kubernetes.io/projected/d86d3d02-d07f-4bf0-a01a-18652faa5111-kube-api-access-sx2lk\") pod \"nova-operator-controller-manager-55bff696bd-jcrbt\" (UID: \"d86d3d02-d07f-4bf0-a01a-18652faa5111\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" Jan 31 04:58:36 crc kubenswrapper[4832]: I0131 04:58:36.996447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqcb\" (UniqueName: 
\"kubernetes.io/projected/f59551da-68de-4704-98fd-d9355e69c5af-kube-api-access-6jqcb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.032825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqht\" (UniqueName: \"kubernetes.io/projected/6d39a5f9-a0b0-4a9e-871b-30a307adfd3d-kube-api-access-hwqht\") pod \"ovn-operator-controller-manager-788c46999f-t6wm8\" (UID: \"6d39a5f9-a0b0-4a9e-871b-30a307adfd3d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.033327 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.044340 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9jmf\" (UniqueName: \"kubernetes.io/projected/aec1c2a0-52b1-4a2e-8986-1e12be79d67c-kube-api-access-q9jmf\") pod \"swift-operator-controller-manager-68fc8c869-m49v4\" (UID: \"aec1c2a0-52b1-4a2e-8986-1e12be79d67c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.049624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.049796 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.050246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dd7\" (UniqueName: \"kubernetes.io/projected/acc59b3c-877b-4a0e-a118-76a05d362ad5-kube-api-access-w9dd7\") pod \"placement-operator-controller-manager-5b964cf4cd-9ptdq\" (UID: \"acc59b3c-877b-4a0e-a118-76a05d362ad5\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.086631 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.088104 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.100499 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ngjt2" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.106048 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.112599 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9jmf\" (UniqueName: \"kubernetes.io/projected/aec1c2a0-52b1-4a2e-8986-1e12be79d67c-kube-api-access-q9jmf\") pod \"swift-operator-controller-manager-68fc8c869-m49v4\" (UID: \"aec1c2a0-52b1-4a2e-8986-1e12be79d67c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.134252 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.135657 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.139041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dng5s" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.140479 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x"] Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.148923 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.149014 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert podName:48118fb9-dcf4-45f5-8096-c558f980eab4 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:38.148982097 +0000 UTC m=+927.097803782 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert") pod "infra-operator-controller-manager-57997b5fcd-hjsbn" (UID: "48118fb9-dcf4-45f5-8096-c558f980eab4") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.149955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.150026 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrgw\" (UniqueName: \"kubernetes.io/projected/551db33b-8ad9-4a8c-9275-2c19c1104232-kube-api-access-5mrgw\") pod \"telemetry-operator-controller-manager-64b5b76f97-r2nnw\" (UID: \"551db33b-8ad9-4a8c-9275-2c19c1104232\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.162332 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.172852 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-7t5cl"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.173887 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.177543 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dlrr8" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.188538 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-7t5cl"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.194380 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.226795 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.232975 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.252375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5km\" (UniqueName: \"kubernetes.io/projected/9ba96940-214f-41b4-a1a2-ecdeced92715-kube-api-access-hm5km\") pod \"test-operator-controller-manager-56f8bfcd9f-2vw9x\" (UID: \"9ba96940-214f-41b4-a1a2-ecdeced92715\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.252471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhnf\" (UniqueName: \"kubernetes.io/projected/e6c1771e-5f66-444c-8718-e6022bbbb473-kube-api-access-xrhnf\") pod \"watcher-operator-controller-manager-564965969-7t5cl\" (UID: \"e6c1771e-5f66-444c-8718-e6022bbbb473\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.252498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mrgw\" (UniqueName: \"kubernetes.io/projected/551db33b-8ad9-4a8c-9275-2c19c1104232-kube-api-access-5mrgw\") pod \"telemetry-operator-controller-manager-64b5b76f97-r2nnw\" (UID: \"551db33b-8ad9-4a8c-9275-2c19c1104232\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.281850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mrgw\" (UniqueName: \"kubernetes.io/projected/551db33b-8ad9-4a8c-9275-2c19c1104232-kube-api-access-5mrgw\") pod \"telemetry-operator-controller-manager-64b5b76f97-r2nnw\" (UID: \"551db33b-8ad9-4a8c-9275-2c19c1104232\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" Jan 31 04:58:37 crc 
kubenswrapper[4832]: I0131 04:58:37.353342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5km\" (UniqueName: \"kubernetes.io/projected/9ba96940-214f-41b4-a1a2-ecdeced92715-kube-api-access-hm5km\") pod \"test-operator-controller-manager-56f8bfcd9f-2vw9x\" (UID: \"9ba96940-214f-41b4-a1a2-ecdeced92715\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.353418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhnf\" (UniqueName: \"kubernetes.io/projected/e6c1771e-5f66-444c-8718-e6022bbbb473-kube-api-access-xrhnf\") pod \"watcher-operator-controller-manager-564965969-7t5cl\" (UID: \"e6c1771e-5f66-444c-8718-e6022bbbb473\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.385155 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.387096 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.389552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhnf\" (UniqueName: \"kubernetes.io/projected/e6c1771e-5f66-444c-8718-e6022bbbb473-kube-api-access-xrhnf\") pod \"watcher-operator-controller-manager-564965969-7t5cl\" (UID: \"e6c1771e-5f66-444c-8718-e6022bbbb473\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.391424 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.391713 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.391489 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-l78n5" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.391533 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.395059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5km\" (UniqueName: \"kubernetes.io/projected/9ba96940-214f-41b4-a1a2-ecdeced92715-kube-api-access-hm5km\") pod \"test-operator-controller-manager-56f8bfcd9f-2vw9x\" (UID: \"9ba96940-214f-41b4-a1a2-ecdeced92715\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.396128 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 
04:58:37.397300 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.399941 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2zpmb" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.400254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.456164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.456293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6tq\" (UniqueName: \"kubernetes.io/projected/f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3-kube-api-access-7n6tq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8sg7w\" (UID: \"f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.456327 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 
04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.456385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp45b\" (UniqueName: \"kubernetes.io/projected/fedc767a-c749-4373-84ab-c32673c34e40-kube-api-access-bp45b\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.456452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.456656 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.456727 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert podName:f59551da-68de-4704-98fd-d9355e69c5af nodeName:}" failed. No retries permitted until 2026-01-31 04:58:38.456708109 +0000 UTC m=+927.405529794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" (UID: "f59551da-68de-4704-98fd-d9355e69c5af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.545795 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.559312 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6tq\" (UniqueName: \"kubernetes.io/projected/f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3-kube-api-access-7n6tq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8sg7w\" (UID: \"f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.559376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.559433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp45b\" (UniqueName: \"kubernetes.io/projected/fedc767a-c749-4373-84ab-c32673c34e40-kube-api-access-bp45b\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.559479 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.559679 4832 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.559746 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:38.059723789 +0000 UTC m=+927.008545474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "webhook-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.561453 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: E0131 04:58:37.561627 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:38.061616858 +0000 UTC m=+927.010438543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "metrics-server-cert" not found Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.580528 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.584687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6tq\" (UniqueName: \"kubernetes.io/projected/f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3-kube-api-access-7n6tq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8sg7w\" (UID: \"f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.585059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp45b\" (UniqueName: \"kubernetes.io/projected/fedc767a-c749-4373-84ab-c32673c34e40-kube-api-access-bp45b\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.594970 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.653049 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.688215 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.732514 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82"] Jan 31 04:58:37 crc kubenswrapper[4832]: I0131 04:58:37.754705 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.011116 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl"] Jan 31 04:58:38 crc kubenswrapper[4832]: W0131 04:58:38.021424 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049ad615_904c_4043_b395_dd242e743140.slice/crio-a7235ed8447538a84a7d105c22fe6517f8e6b020f6e851605c2a2c54d869ea8b WatchSource:0}: Error finding container a7235ed8447538a84a7d105c22fe6517f8e6b020f6e851605c2a2c54d869ea8b: Status 404 returned error can't find the container with id a7235ed8447538a84a7d105c22fe6517f8e6b020f6e851605c2a2c54d869ea8b Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.022632 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.040664 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f"] Jan 31 04:58:38 crc kubenswrapper[4832]: W0131 04:58:38.061081 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod141c81b8_f2f6_4f96_9ac7_83305f4eabd0.slice/crio-77b971ad224e07766ed061bf3ad8e406f9d1f1e5d68e78d362e4a0a1ac31caaa WatchSource:0}: Error finding container 
77b971ad224e07766ed061bf3ad8e406f9d1f1e5d68e78d362e4a0a1ac31caaa: Status 404 returned error can't find the container with id 77b971ad224e07766ed061bf3ad8e406f9d1f1e5d68e78d362e4a0a1ac31caaa Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.066531 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.072290 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.072375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.072479 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.072521 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:39.072506632 +0000 UTC m=+928.021328317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.072573 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.072720 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:39.072676718 +0000 UTC m=+928.021498403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "metrics-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.074377 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.124635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" event={"ID":"805b9f0e-cb57-4b71-b199-b8ee289af169","Type":"ContainerStarted","Data":"da347b8bccbe95326208a110e0a56b9a42dede45b3a82110ab60f96ee0b01644"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.126682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" 
event={"ID":"049ad615-904c-4043-b395-dd242e743140","Type":"ContainerStarted","Data":"a7235ed8447538a84a7d105c22fe6517f8e6b020f6e851605c2a2c54d869ea8b"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.127611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" event={"ID":"ea8d7014-c1f0-4b4f-aa01-7865124c3187","Type":"ContainerStarted","Data":"468c29a5cabfc94a1e8cedefccd2284d76eb665f5532fce40306e9b6e634d3bb"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.129124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" event={"ID":"b23fff55-653f-417a-9f77-d7b115586ade","Type":"ContainerStarted","Data":"c194a98643ff5cff152bc74b55083910843336481155823be13694685196938e"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.133443 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" event={"ID":"830a5967-5b56-4c70-8940-ef90cd945807","Type":"ContainerStarted","Data":"5ee28172ef0d9a5f08b4c5320150b71d371ab2b9329681a94993d561499f8a46"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.137807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" event={"ID":"141c81b8-f2f6-4f96-9ac7-83305f4eabd0","Type":"ContainerStarted","Data":"77b971ad224e07766ed061bf3ad8e406f9d1f1e5d68e78d362e4a0a1ac31caaa"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.144004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" event={"ID":"c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b","Type":"ContainerStarted","Data":"4870afc874274b03e5d82e992844ba2f4561d895d98b09aced98b5b723045e7d"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.145710 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" event={"ID":"0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d","Type":"ContainerStarted","Data":"763093d836020e2c3c03afc9a6f11f9895d71a719db93a4cbe5c06c933df241f"} Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.175071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.175999 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.176098 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert podName:48118fb9-dcf4-45f5-8096-c558f980eab4 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:40.17607136 +0000 UTC m=+929.124893035 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert") pod "infra-operator-controller-manager-57997b5fcd-hjsbn" (UID: "48118fb9-dcf4-45f5-8096-c558f980eab4") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.229498 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.243765 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.432675 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks"] Jan 31 04:58:38 crc kubenswrapper[4832]: W0131 04:58:38.438180 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c8b7b2a_0a2f_4a69_b538_6afa0fb7e138.slice/crio-236d830b428a636d29e2b33633eefd1458ac02622519ac0697a179576c8960bf WatchSource:0}: Error finding container 236d830b428a636d29e2b33633eefd1458ac02622519ac0697a179576c8960bf: Status 404 returned error can't find the container with id 236d830b428a636d29e2b33633eefd1458ac02622519ac0697a179576c8960bf Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.439254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.455638 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.460192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.476672 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8"] Jan 31 04:58:38 crc kubenswrapper[4832]: W0131 04:58:38.477504 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc59b3c_877b_4a0e_a118_76a05d362ad5.slice/crio-b4fa266832599ae64d8bfbd8c8ebf58e5241b61d90268c3d198b95d3fea8f5dd WatchSource:0}: Error finding container b4fa266832599ae64d8bfbd8c8ebf58e5241b61d90268c3d198b95d3fea8f5dd: Status 404 returned error can't find the container with id b4fa266832599ae64d8bfbd8c8ebf58e5241b61d90268c3d198b95d3fea8f5dd Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.481694 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw"] Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.486244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.486529 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.486614 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert podName:f59551da-68de-4704-98fd-d9355e69c5af nodeName:}" failed. 
No retries permitted until 2026-01-31 04:58:40.486595519 +0000 UTC m=+929.435417204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" (UID: "f59551da-68de-4704-98fd-d9355e69c5af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.503111 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4"] Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.504215 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mrgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-r2nnw_openstack-operators(551db33b-8ad9-4a8c-9275-2c19c1104232): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.505807 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" podUID="551db33b-8ad9-4a8c-9275-2c19c1104232" Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.511577 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x"] Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.520572 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hwqht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-t6wm8_openstack-operators(6d39a5f9-a0b0-4a9e-871b-30a307adfd3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.521964 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-7t5cl"] Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.521994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" podUID="6d39a5f9-a0b0-4a9e-871b-30a307adfd3d" Jan 31 04:58:38 crc kubenswrapper[4832]: W0131 04:58:38.524156 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec1c2a0_52b1_4a2e_8986_1e12be79d67c.slice/crio-7ea150a23a2960b2920046f09baf48c89f8d3edba38e77e098f1d1ff75108863 WatchSource:0}: Error finding container 7ea150a23a2960b2920046f09baf48c89f8d3edba38e77e098f1d1ff75108863: Status 404 returned error can't find the container with id 
7ea150a23a2960b2920046f09baf48c89f8d3edba38e77e098f1d1ff75108863 Jan 31 04:58:38 crc kubenswrapper[4832]: I0131 04:58:38.531635 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w"] Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.532945 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hm5km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-2vw9x_openstack-operators(9ba96940-214f-41b4-a1a2-ecdeced92715): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.533111 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrhnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-7t5cl_openstack-operators(e6c1771e-5f66-444c-8718-e6022bbbb473): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.534097 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" podUID="9ba96940-214f-41b4-a1a2-ecdeced92715" Jan 31 04:58:38 crc 
kubenswrapper[4832]: E0131 04:58:38.534878 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" podUID="e6c1771e-5f66-444c-8718-e6022bbbb473" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.547276 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q9jmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-m49v4_openstack-operators(aec1c2a0-52b1-4a2e-8986-1e12be79d67c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.548408 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" podUID="aec1c2a0-52b1-4a2e-8986-1e12be79d67c" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.594697 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n6tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8sg7w_openstack-operators(f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:58:38 crc kubenswrapper[4832]: E0131 04:58:38.596805 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" podUID="f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3" Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.098575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.098722 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.098880 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.098961 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.098975 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:58:41.098949645 +0000 UTC m=+930.047771520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "webhook-server-cert" not found Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.099063 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:41.099038738 +0000 UTC m=+930.047860613 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "metrics-server-cert" not found Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.159699 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" event={"ID":"9d690743-c300-46c9-83d7-c416ba5aff83","Type":"ContainerStarted","Data":"010ccbdff504f32809ad33d4f9c79d4282ad3f64b3701114cde7231505e1ed6d"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.163857 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" event={"ID":"551db33b-8ad9-4a8c-9275-2c19c1104232","Type":"ContainerStarted","Data":"edbd3290f5f04cf0531011979ab48ea773a4c6be46f83472b4ce96280400e7d8"} Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.165712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" podUID="551db33b-8ad9-4a8c-9275-2c19c1104232" Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.175086 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" event={"ID":"7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138","Type":"ContainerStarted","Data":"236d830b428a636d29e2b33633eefd1458ac02622519ac0697a179576c8960bf"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.177394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" event={"ID":"e7569d69-3ad6-4127-a49f-a16706a35099","Type":"ContainerStarted","Data":"96b6657fa28a82a933ba889e470b06763eaff47f907a806db707b78d8a92a9c4"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.179528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" event={"ID":"acc59b3c-877b-4a0e-a118-76a05d362ad5","Type":"ContainerStarted","Data":"b4fa266832599ae64d8bfbd8c8ebf58e5241b61d90268c3d198b95d3fea8f5dd"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.182042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" event={"ID":"f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3","Type":"ContainerStarted","Data":"ffea64f6b2d52417ad439692864a7cb924ac40d1465736d6b218d57bc0f83108"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.186874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" 
event={"ID":"aec1c2a0-52b1-4a2e-8986-1e12be79d67c","Type":"ContainerStarted","Data":"7ea150a23a2960b2920046f09baf48c89f8d3edba38e77e098f1d1ff75108863"} Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.187918 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" podUID="f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3" Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.191633 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" podUID="aec1c2a0-52b1-4a2e-8986-1e12be79d67c" Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.193253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" event={"ID":"0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd","Type":"ContainerStarted","Data":"3abd959523c4bea23ec3034801a0f506dbad8b9e8d62df1e2c69e472a2e6e271"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.247319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" event={"ID":"9ba96940-214f-41b4-a1a2-ecdeced92715","Type":"ContainerStarted","Data":"6d0cd90df6bfab750c81da95a66c071c8b4c4debd0917a7f3e476d314b79d640"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.248816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" 
event={"ID":"6d39a5f9-a0b0-4a9e-871b-30a307adfd3d","Type":"ContainerStarted","Data":"9616bd0dd4a81e6b8c2d80e07c5456395f0c9c4a72231bec09d47af3ee1338ee"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.250592 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" event={"ID":"d86d3d02-d07f-4bf0-a01a-18652faa5111","Type":"ContainerStarted","Data":"6c02786a1a92a5b470adda999036f528a123266fb0a4ab565f7455c306716039"} Jan 31 04:58:39 crc kubenswrapper[4832]: I0131 04:58:39.251731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" event={"ID":"e6c1771e-5f66-444c-8718-e6022bbbb473","Type":"ContainerStarted","Data":"6d4d759b809e66ca0e65974d43b47ebfdf5463bdb06c601e8488b31e8296b488"} Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.259469 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" podUID="9ba96940-214f-41b4-a1a2-ecdeced92715" Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.266021 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" podUID="e6c1771e-5f66-444c-8718-e6022bbbb473" Jan 31 04:58:39 crc kubenswrapper[4832]: E0131 04:58:39.266132 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" podUID="6d39a5f9-a0b0-4a9e-871b-30a307adfd3d" Jan 31 04:58:40 crc kubenswrapper[4832]: I0131 04:58:40.228845 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.229098 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.229154 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert podName:48118fb9-dcf4-45f5-8096-c558f980eab4 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:44.229138713 +0000 UTC m=+933.177960398 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert") pod "infra-operator-controller-manager-57997b5fcd-hjsbn" (UID: "48118fb9-dcf4-45f5-8096-c558f980eab4") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.288934 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" podUID="9ba96940-214f-41b4-a1a2-ecdeced92715" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.288992 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" podUID="e6c1771e-5f66-444c-8718-e6022bbbb473" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.288994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" podUID="551db33b-8ad9-4a8c-9275-2c19c1104232" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.289074 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" podUID="f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.289119 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" podUID="6d39a5f9-a0b0-4a9e-871b-30a307adfd3d" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.298528 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" podUID="aec1c2a0-52b1-4a2e-8986-1e12be79d67c" Jan 31 04:58:40 crc kubenswrapper[4832]: I0131 04:58:40.537356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.537634 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:40 crc kubenswrapper[4832]: E0131 04:58:40.537848 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert podName:f59551da-68de-4704-98fd-d9355e69c5af nodeName:}" failed. No retries permitted until 2026-01-31 04:58:44.537824083 +0000 UTC m=+933.486645768 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" (UID: "f59551da-68de-4704-98fd-d9355e69c5af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:41 crc kubenswrapper[4832]: I0131 04:58:41.149459 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:41 crc kubenswrapper[4832]: I0131 04:58:41.149652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:41 crc kubenswrapper[4832]: E0131 04:58:41.149897 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:58:41 crc kubenswrapper[4832]: E0131 04:58:41.149967 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:58:45.149947013 +0000 UTC m=+934.098768698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "webhook-server-cert" not found Jan 31 04:58:41 crc kubenswrapper[4832]: E0131 04:58:41.150590 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:58:41 crc kubenswrapper[4832]: E0131 04:58:41.151000 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:45.150786789 +0000 UTC m=+934.099608464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "metrics-server-cert" not found Jan 31 04:58:44 crc kubenswrapper[4832]: I0131 04:58:44.234011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:44 crc kubenswrapper[4832]: E0131 04:58:44.234160 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:44 crc kubenswrapper[4832]: E0131 04:58:44.234479 4832 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert podName:48118fb9-dcf4-45f5-8096-c558f980eab4 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:52.234462444 +0000 UTC m=+941.183284129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert") pod "infra-operator-controller-manager-57997b5fcd-hjsbn" (UID: "48118fb9-dcf4-45f5-8096-c558f980eab4") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:58:44 crc kubenswrapper[4832]: I0131 04:58:44.540398 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:44 crc kubenswrapper[4832]: E0131 04:58:44.540636 4832 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:44 crc kubenswrapper[4832]: E0131 04:58:44.540751 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert podName:f59551da-68de-4704-98fd-d9355e69c5af nodeName:}" failed. No retries permitted until 2026-01-31 04:58:52.540730471 +0000 UTC m=+941.489552156 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" (UID: "f59551da-68de-4704-98fd-d9355e69c5af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:58:45 crc kubenswrapper[4832]: I0131 04:58:45.156159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:45 crc kubenswrapper[4832]: I0131 04:58:45.156353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:45 crc kubenswrapper[4832]: E0131 04:58:45.156438 4832 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:58:45 crc kubenswrapper[4832]: E0131 04:58:45.156556 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:53.156525544 +0000 UTC m=+942.105347279 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "metrics-server-cert" not found Jan 31 04:58:45 crc kubenswrapper[4832]: E0131 04:58:45.156598 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:58:45 crc kubenswrapper[4832]: E0131 04:58:45.156670 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs podName:fedc767a-c749-4373-84ab-c32673c34e40 nodeName:}" failed. No retries permitted until 2026-01-31 04:58:53.156656598 +0000 UTC m=+942.105478323 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs") pod "openstack-operator-controller-manager-68ffd75798-45z7h" (UID: "fedc767a-c749-4373-84ab-c32673c34e40") : secret "webhook-server-cert" not found Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.571473 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"] Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.574496 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.583986 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"] Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.618400 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.618653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8cj\" (UniqueName: \"kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.618850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.719365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.719463 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kr8cj\" (UniqueName: \"kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.719482 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.720042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.720066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.743716 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8cj\" (UniqueName: \"kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj\") pod \"redhat-marketplace-hpg4j\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:46 crc kubenswrapper[4832]: I0131 04:58:46.908353 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:58:48 crc kubenswrapper[4832]: I0131 04:58:48.540055 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:58:48 crc kubenswrapper[4832]: I0131 04:58:48.540405 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:58:48 crc kubenswrapper[4832]: I0131 04:58:48.540467 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 04:58:48 crc kubenswrapper[4832]: I0131 04:58:48.541334 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:58:48 crc kubenswrapper[4832]: I0131 04:58:48.541412 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b" gracePeriod=600 Jan 31 04:58:49 crc kubenswrapper[4832]: I0131 04:58:49.401390 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b" exitCode=0 Jan 31 04:58:49 crc kubenswrapper[4832]: I0131 04:58:49.401524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b"} Jan 31 04:58:49 crc kubenswrapper[4832]: I0131 04:58:49.401762 4832 scope.go:117] "RemoveContainer" containerID="4cfa232b4e7f9afe6aa34948e511fc13a64fb3b1d7193b3fae0b6644206b914b" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.315938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.326102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/48118fb9-dcf4-45f5-8096-c558f980eab4-cert\") pod \"infra-operator-controller-manager-57997b5fcd-hjsbn\" (UID: \"48118fb9-dcf4-45f5-8096-c558f980eab4\") " pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.387516 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.619870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.625751 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59551da-68de-4704-98fd-d9355e69c5af-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx\" (UID: \"f59551da-68de-4704-98fd-d9355e69c5af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:52 crc kubenswrapper[4832]: I0131 04:58:52.745487 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" Jan 31 04:58:53 crc kubenswrapper[4832]: I0131 04:58:53.232124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:53 crc kubenswrapper[4832]: I0131 04:58:53.232581 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:53 crc kubenswrapper[4832]: I0131 04:58:53.236363 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-metrics-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:53 crc kubenswrapper[4832]: I0131 04:58:53.240521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fedc767a-c749-4373-84ab-c32673c34e40-webhook-certs\") pod \"openstack-operator-controller-manager-68ffd75798-45z7h\" (UID: \"fedc767a-c749-4373-84ab-c32673c34e40\") " pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:53 crc kubenswrapper[4832]: I0131 04:58:53.270006 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" Jan 31 04:58:53 crc kubenswrapper[4832]: E0131 04:58:53.787710 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 31 04:58:53 crc kubenswrapper[4832]: E0131 04:58:53.787917 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7hzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-8xwhl_openstack-operators(049ad615-904c-4043-b395-dd242e743140): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:58:53 crc kubenswrapper[4832]: E0131 04:58:53.789138 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" podUID="049ad615-904c-4043-b395-dd242e743140" Jan 31 04:58:54 crc kubenswrapper[4832]: E0131 04:58:54.371281 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 31 04:58:54 crc kubenswrapper[4832]: E0131 04:58:54.371516 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wf4r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-c9pxb_openstack-operators(e7569d69-3ad6-4127-a49f-a16706a35099): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:58:54 crc kubenswrapper[4832]: E0131 04:58:54.372824 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" podUID="e7569d69-3ad6-4127-a49f-a16706a35099" Jan 31 04:58:54 crc kubenswrapper[4832]: E0131 04:58:54.452628 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" podUID="049ad615-904c-4043-b395-dd242e743140" Jan 31 04:58:54 crc kubenswrapper[4832]: E0131 04:58:54.453058 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" podUID="e7569d69-3ad6-4127-a49f-a16706a35099" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.193450 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.193926 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4b7c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-4445f_openstack-operators(830a5967-5b56-4c70-8940-ef90cd945807): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.195083 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" podUID="830a5967-5b56-4c70-8940-ef90cd945807" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.459167 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" podUID="830a5967-5b56-4c70-8940-ef90cd945807" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.899742 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.900784 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx2lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-jcrbt_openstack-operators(d86d3d02-d07f-4bf0-a01a-18652faa5111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:58:55 crc kubenswrapper[4832]: E0131 04:58:55.902037 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" podUID="d86d3d02-d07f-4bf0-a01a-18652faa5111" Jan 31 04:58:56 crc kubenswrapper[4832]: E0131 04:58:56.435957 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 31 04:58:56 crc kubenswrapper[4832]: E0131 04:58:56.436238 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gk8h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-jl4p7_openstack-operators(0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:58:56 crc kubenswrapper[4832]: E0131 04:58:56.437503 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" podUID="0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd" Jan 31 04:58:56 crc kubenswrapper[4832]: E0131 04:58:56.468168 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" podUID="d86d3d02-d07f-4bf0-a01a-18652faa5111" Jan 31 04:58:56 crc kubenswrapper[4832]: E0131 04:58:56.469802 4832 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" podUID="0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd"
Jan 31 04:59:00 crc kubenswrapper[4832]: I0131 04:59:00.770746 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"]
Jan 31 04:59:01 crc kubenswrapper[4832]: W0131 04:59:01.206268 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod871ed21f_5953_464f_9a56_e7b8597a586e.slice/crio-52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285 WatchSource:0}: Error finding container 52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285: Status 404 returned error can't find the container with id 52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285
Jan 31 04:59:01 crc kubenswrapper[4832]: I0131 04:59:01.515325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerStarted","Data":"52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285"}
Jan 31 04:59:01 crc kubenswrapper[4832]: I0131 04:59:01.754359 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx"]
Jan 31 04:59:01 crc kubenswrapper[4832]: I0131 04:59:01.835358 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h"]
Jan 31 04:59:01 crc kubenswrapper[4832]: I0131 04:59:01.858386 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn"]
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.575942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" event={"ID":"551db33b-8ad9-4a8c-9275-2c19c1104232","Type":"ContainerStarted","Data":"abc23e8b37dadf7b988b95f41e9f1e256d70c11de4c0f8eb7fa76046e8f3d0a6"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.577262 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.604263 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" event={"ID":"aec1c2a0-52b1-4a2e-8986-1e12be79d67c","Type":"ContainerStarted","Data":"240880391c19b63b722a9e5e6180db48bb6d53e8c85e9e2aba9d9d4d89e2015f"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.605060 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.615029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" event={"ID":"f59551da-68de-4704-98fd-d9355e69c5af","Type":"ContainerStarted","Data":"70782824b4ae2ad0063cf0f5f6a0062c15a8133c9a40a5dea51d03cd81f8397c"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.644732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" event={"ID":"48118fb9-dcf4-45f5-8096-c558f980eab4","Type":"ContainerStarted","Data":"75a8b72c5278261def5eae44eba63b61c5bb1e4bce2ab4ed82f4f3474fca7c1e"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.675407 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw" podStartSLOduration=3.862396378 podStartE2EDuration="26.675382661s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.50400751 +0000 UTC m=+927.452829195" lastFinishedPulling="2026-01-31 04:59:01.316993793 +0000 UTC m=+950.265815478" observedRunningTime="2026-01-31 04:59:02.612776725 +0000 UTC m=+951.561598410" watchObservedRunningTime="2026-01-31 04:59:02.675382661 +0000 UTC m=+951.624204346"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.698455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" event={"ID":"805b9f0e-cb57-4b71-b199-b8ee289af169","Type":"ContainerStarted","Data":"ccc13b89d46381a685827bb05e709e72d539ad87306ecaa83ccbdb60b72d75c9"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.699437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.724446 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4" podStartSLOduration=3.9055678670000002 podStartE2EDuration="26.724424304s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.547129089 +0000 UTC m=+927.495950774" lastFinishedPulling="2026-01-31 04:59:01.365985526 +0000 UTC m=+950.314807211" observedRunningTime="2026-01-31 04:59:02.656396761 +0000 UTC m=+951.605218446" watchObservedRunningTime="2026-01-31 04:59:02.724424304 +0000 UTC m=+951.673245989"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.733336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.753319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" event={"ID":"b23fff55-653f-417a-9f77-d7b115586ade","Type":"ContainerStarted","Data":"c8521b7d0f29028c1169ac87052c16c643de38d26a7d397493a9d5944b9b5b1a"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.754574 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.777933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" event={"ID":"c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b","Type":"ContainerStarted","Data":"63716620bd210449e9b987af00a5c6e6fa9a512180df4a1a73a78b12e1da8131"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.778821 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.794592 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx" podStartSLOduration=7.208098863 podStartE2EDuration="26.794550683s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:37.66788414 +0000 UTC m=+926.616705825" lastFinishedPulling="2026-01-31 04:58:57.25433592 +0000 UTC m=+946.203157645" observedRunningTime="2026-01-31 04:59:02.742988142 +0000 UTC m=+951.691809827" watchObservedRunningTime="2026-01-31 04:59:02.794550683 +0000 UTC m=+951.743372368"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.796386 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" event={"ID":"9d690743-c300-46c9-83d7-c416ba5aff83","Type":"ContainerStarted","Data":"cb9a5f79fa5cce85c8b283c8649c6ceb63bfd9a894549fe4ff427282bd262fc1"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.797220 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.805027 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" event={"ID":"7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138","Type":"ContainerStarted","Data":"93b8830d0004bfec960d5a7036633154bc7f0d16282364e50d07ead41061a39f"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.805521 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.819388 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2" podStartSLOduration=8.467600099 podStartE2EDuration="26.819356114s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.062274845 +0000 UTC m=+927.011096530" lastFinishedPulling="2026-01-31 04:58:56.41403087 +0000 UTC m=+945.362852545" observedRunningTime="2026-01-31 04:59:02.805853355 +0000 UTC m=+951.754675040" watchObservedRunningTime="2026-01-31 04:59:02.819356114 +0000 UTC m=+951.768177799"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.825088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" event={"ID":"acc59b3c-877b-4a0e-a118-76a05d362ad5","Type":"ContainerStarted","Data":"afad3fee14475451ab873c5df0dc7808b58a3b8df819a78fb57ca3255d39e67e"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.825814 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.852613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" event={"ID":"f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3","Type":"ContainerStarted","Data":"ab2aed84a15c0ec6e7439ca30c97a6bc46a621ff5634bbfc402c4b236d86d27f"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.869911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" event={"ID":"6d39a5f9-a0b0-4a9e-871b-30a307adfd3d","Type":"ContainerStarted","Data":"15f585d906035bc5d6e29d70dea85a9d075a5a19ce777709e853d6ce523a2fc2"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.870897 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.874877 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82" podStartSLOduration=8.792764822 podStartE2EDuration="26.874862409s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:37.7956399 +0000 UTC m=+926.744461585" lastFinishedPulling="2026-01-31 04:58:55.877737487 +0000 UTC m=+944.826559172" observedRunningTime="2026-01-31 04:59:02.860032308 +0000 UTC m=+951.808853993" watchObservedRunningTime="2026-01-31 04:59:02.874862409 +0000 UTC m=+951.823684094"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.886291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" event={"ID":"e6c1771e-5f66-444c-8718-e6022bbbb473","Type":"ContainerStarted","Data":"2c7d890d8d8f662d14aec594d0fcc8d06e8643558bd9a40b92793b2ab098efd7"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.886986 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.909265 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8sg7w" podStartSLOduration=3.125005696 podStartE2EDuration="25.909247287s" podCreationTimestamp="2026-01-31 04:58:37 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.594520052 +0000 UTC m=+927.543341737" lastFinishedPulling="2026-01-31 04:59:01.378761623 +0000 UTC m=+950.327583328" observedRunningTime="2026-01-31 04:59:02.907797493 +0000 UTC m=+951.856619178" watchObservedRunningTime="2026-01-31 04:59:02.909247287 +0000 UTC m=+951.858068972"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.909689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" event={"ID":"0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d","Type":"ContainerStarted","Data":"261ab6da3059f120b417716dfdbcef1bcd400fe7ec33c8dd91667d202eda0d44"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.910632 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.912206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" event={"ID":"9ba96940-214f-41b4-a1a2-ecdeced92715","Type":"ContainerStarted","Data":"0d2918e55e4a65869ee10e331d47d31da1c7c65e18181b50fa1de54d613ad99b"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.912553 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.947060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" event={"ID":"fedc767a-c749-4373-84ab-c32673c34e40","Type":"ContainerStarted","Data":"a9d13cdda947d94b1d6896d57a91578285ad8fe14baf6f919e56b66b48e7cf83"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.947720 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.961337 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks" podStartSLOduration=8.168077131 podStartE2EDuration="26.961312405s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.467613909 +0000 UTC m=+927.416435594" lastFinishedPulling="2026-01-31 04:58:57.260849163 +0000 UTC m=+946.209670868" observedRunningTime="2026-01-31 04:59:02.948085304 +0000 UTC m=+951.896907009" watchObservedRunningTime="2026-01-31 04:59:02.961312405 +0000 UTC m=+951.910134090"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.978737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" event={"ID":"141c81b8-f2f6-4f96-9ac7-83305f4eabd0","Type":"ContainerStarted","Data":"fd0101d4c741428e69470cb32227bee7f2b7e7869287f0db5c13c8c11a670563"}
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.979656 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t"
Jan 31 04:59:02 crc kubenswrapper[4832]: I0131 04:59:02.990960 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz" podStartSLOduration=8.201915973 podStartE2EDuration="26.990944856s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.465978068 +0000 UTC m=+927.414799753" lastFinishedPulling="2026-01-31 04:58:57.255006931 +0000 UTC m=+946.203828636" observedRunningTime="2026-01-31 04:59:02.99075224 +0000 UTC m=+951.939573925" watchObservedRunningTime="2026-01-31 04:59:02.990944856 +0000 UTC m=+951.939766541"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.001516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" event={"ID":"ea8d7014-c1f0-4b4f-aa01-7865124c3187","Type":"ContainerStarted","Data":"009c4fd79af145e3fea0d89c2bb46fba66f9d319a29567c99b1d67d02484accf"}
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.002348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.010522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerStarted","Data":"7595e34e5d86ad042aeb44a629f85d59fba952656d7998797826b6fdb4be2d39"}
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.026913 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq" podStartSLOduration=8.257951565 podStartE2EDuration="27.026892173s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.484867735 +0000 UTC m=+927.433689420" lastFinishedPulling="2026-01-31 04:58:57.253808343 +0000 UTC m=+946.202630028" observedRunningTime="2026-01-31 04:59:03.026829321 +0000 UTC m=+951.975651006" watchObservedRunningTime="2026-01-31 04:59:03.026892173 +0000 UTC m=+951.975713858"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.087724 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c" podStartSLOduration=7.911602503 podStartE2EDuration="27.087706243s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.078197989 +0000 UTC m=+927.027019674" lastFinishedPulling="2026-01-31 04:58:57.254301719 +0000 UTC m=+946.203123414" observedRunningTime="2026-01-31 04:59:03.052945172 +0000 UTC m=+952.001766857" watchObservedRunningTime="2026-01-31 04:59:03.087706243 +0000 UTC m=+952.036527928"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.131693 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8" podStartSLOduration=4.416685189 podStartE2EDuration="27.131666498s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.520409729 +0000 UTC m=+927.469231414" lastFinishedPulling="2026-01-31 04:59:01.235391038 +0000 UTC m=+950.184212723" observedRunningTime="2026-01-31 04:59:03.117641253 +0000 UTC m=+952.066462938" watchObservedRunningTime="2026-01-31 04:59:03.131666498 +0000 UTC m=+952.080488183"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.194956 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t" podStartSLOduration=8.845980386 podStartE2EDuration="27.194933944s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.065046351 +0000 UTC m=+927.013868036" lastFinishedPulling="2026-01-31 04:58:56.413999879 +0000 UTC m=+945.362821594" observedRunningTime="2026-01-31 04:59:03.162047972 +0000 UTC m=+952.110869657" watchObservedRunningTime="2026-01-31 04:59:03.194933944 +0000 UTC m=+952.143755629"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.205142 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl" podStartSLOduration=4.503327142 podStartE2EDuration="27.205116701s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.533030571 +0000 UTC m=+927.481852256" lastFinishedPulling="2026-01-31 04:59:01.23482012 +0000 UTC m=+950.183641815" observedRunningTime="2026-01-31 04:59:03.190520857 +0000 UTC m=+952.139342542" watchObservedRunningTime="2026-01-31 04:59:03.205116701 +0000 UTC m=+952.153938386"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.219374 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4" podStartSLOduration=8.576130299999999 podStartE2EDuration="27.219351603s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:37.79564109 +0000 UTC m=+926.744462775" lastFinishedPulling="2026-01-31 04:58:56.438862393 +0000 UTC m=+945.387684078" observedRunningTime="2026-01-31 04:59:03.216915647 +0000 UTC m=+952.165737322" watchObservedRunningTime="2026-01-31 04:59:03.219351603 +0000 UTC m=+952.168173278"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.258638 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x" podStartSLOduration=4.555334658 podStartE2EDuration="27.258617943s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.532772854 +0000 UTC m=+927.481594539" lastFinishedPulling="2026-01-31 04:59:01.236056119 +0000 UTC m=+950.184877824" observedRunningTime="2026-01-31 04:59:03.25692992 +0000 UTC m=+952.205751605" watchObservedRunningTime="2026-01-31 04:59:03.258617943 +0000 UTC m=+952.207439628"
Jan 31 04:59:03 crc kubenswrapper[4832]: I0131 04:59:03.303778 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" podStartSLOduration=27.303753355 podStartE2EDuration="27.303753355s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:59:03.29586842 +0000 UTC m=+952.244690105" watchObservedRunningTime="2026-01-31 04:59:03.303753355 +0000 UTC m=+952.252575040"
Jan 31 04:59:04 crc kubenswrapper[4832]: I0131 04:59:04.032831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h" event={"ID":"fedc767a-c749-4373-84ab-c32673c34e40","Type":"ContainerStarted","Data":"8a259f04509b3bdbfd2ea86c695be0a89159407c9af7baab2d287b872326cd32"}
Jan 31 04:59:04 crc kubenswrapper[4832]: I0131 04:59:04.038198 4832 generic.go:334] "Generic (PLEG): container finished" podID="871ed21f-5953-464f-9a56-e7b8597a586e" containerID="7595e34e5d86ad042aeb44a629f85d59fba952656d7998797826b6fdb4be2d39" exitCode=0
Jan 31 04:59:04 crc kubenswrapper[4832]: I0131 04:59:04.039111 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerDied","Data":"7595e34e5d86ad042aeb44a629f85d59fba952656d7998797826b6fdb4be2d39"}
Jan 31 04:59:05 crc kubenswrapper[4832]: I0131 04:59:05.067209 4832 generic.go:334] "Generic (PLEG): container finished" podID="871ed21f-5953-464f-9a56-e7b8597a586e" containerID="2cb1b1e7af0aa16612f95c90c2e4e87c5fe8de5dc2df54e7208bd8f572dca2a7" exitCode=0
Jan 31 04:59:05 crc kubenswrapper[4832]: I0131 04:59:05.067290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerDied","Data":"2cb1b1e7af0aa16612f95c90c2e4e87c5fe8de5dc2df54e7208bd8f572dca2a7"}
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.484290 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zr7l4"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.506934 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5hl82"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.520227 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-gc6zx"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.689416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-pnktz"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.713338 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-2hr2t"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.752176 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-4m8x2"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.908706 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-jtr9c"
Jan 31 04:59:06 crc kubenswrapper[4832]: I0131 04:59:06.925687 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fdxks"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.085004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" event={"ID":"48118fb9-dcf4-45f5-8096-c558f980eab4","Type":"ContainerStarted","Data":"7bb8c98a4bbf091a443cfdad05fdfc41f7db4b85cd66d18b9efda872e077ad91"}
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.085148 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.086905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" event={"ID":"049ad615-904c-4043-b395-dd242e743140","Type":"ContainerStarted","Data":"290c8ca209358e4bfae557bfb4ae267dcf6c2407541288cd755855327a2b4af1"}
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.087216 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.088926 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" event={"ID":"f59551da-68de-4704-98fd-d9355e69c5af","Type":"ContainerStarted","Data":"84ef2a19634bac6d74057f2ebce1abb1c6b19c7183b04e23e0adada7156fcde3"}
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.089107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.092297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerStarted","Data":"f8dfacbb1ab33e2d670e178b61af90f7b99dc0a8fa665a11368e1f080ea863a2"}
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.112741 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn" podStartSLOduration=27.04225491 podStartE2EDuration="31.112718546s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:59:01.864893867 +0000 UTC m=+950.813715552" lastFinishedPulling="2026-01-31 04:59:05.935357503 +0000 UTC m=+954.884179188" observedRunningTime="2026-01-31 04:59:07.10675235 +0000 UTC m=+956.055574035" watchObservedRunningTime="2026-01-31 04:59:07.112718546 +0000 UTC m=+956.061540231"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.169955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-t6wm8"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.175628 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpg4j" podStartSLOduration=18.256435607 podStartE2EDuration="21.17561105s" podCreationTimestamp="2026-01-31 04:58:46 +0000 UTC" firstStartedPulling="2026-01-31 04:59:03.016256912 +0000 UTC m=+951.965078587" lastFinishedPulling="2026-01-31 04:59:05.935432345 +0000 UTC m=+954.884254030" observedRunningTime="2026-01-31 04:59:07.14405651 +0000 UTC m=+956.092878205" watchObservedRunningTime="2026-01-31 04:59:07.17561105 +0000 UTC m=+956.124432735"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.179461 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl" podStartSLOduration=2.856676957 podStartE2EDuration="31.179451229s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.025890594 +0000 UTC m=+926.974712279" lastFinishedPulling="2026-01-31 04:59:06.348664866 +0000 UTC m=+955.297486551" observedRunningTime="2026-01-31 04:59:07.17400329 +0000 UTC m=+956.122824975" watchObservedRunningTime="2026-01-31 04:59:07.179451229 +0000 UTC m=+956.128272914"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.204595 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9ptdq"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.213458 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx" podStartSLOduration=27.017199042 podStartE2EDuration="31.213430645s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:59:01.771396473 +0000 UTC m=+950.720218158" lastFinishedPulling="2026-01-31 04:59:05.967628076 +0000 UTC m=+954.916449761" observedRunningTime="2026-01-31 04:59:07.199929166 +0000 UTC m=+956.148750851" watchObservedRunningTime="2026-01-31 04:59:07.213430645 +0000 UTC m=+956.162252330"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.234398 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-m49v4"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.550129 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r2nnw"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.584506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-2vw9x"
Jan 31 04:59:07 crc kubenswrapper[4832]: I0131 04:59:07.597516 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-7t5cl"
Jan 31 04:59:08 crc kubenswrapper[4832]: I0131 04:59:08.101629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" event={"ID":"e7569d69-3ad6-4127-a49f-a16706a35099","Type":"ContainerStarted","Data":"1aa3453fa8dc7335c7859373f41de37bdbd25be277fff13fb424b6f00066f118"}
Jan 31 04:59:08 crc kubenswrapper[4832]: I0131 04:59:08.129237 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb" podStartSLOduration=3.277988019 podStartE2EDuration="32.12921235s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.470949813 +0000 UTC m=+927.419771498" lastFinishedPulling="2026-01-31 04:59:07.322174144 +0000 UTC m=+956.270995829" observedRunningTime="2026-01-31 04:59:08.120674345 +0000 UTC m=+957.069496030" watchObservedRunningTime="2026-01-31 04:59:08.12921235 +0000 UTC m=+957.078034035"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.110034 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" event={"ID":"0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd","Type":"ContainerStarted","Data":"0beec287c7675abba8b182721b453934a1733c2c8bcd157831460e4cf183e4e0"}
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.110686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.111760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" event={"ID":"d86d3d02-d07f-4bf0-a01a-18652faa5111","Type":"ContainerStarted","Data":"f99e941b8ac2190ccea7f5de0b5c1aec1d92d6d7b9d3e44daebfbd21d538dd0f"}
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.112008 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.113230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" event={"ID":"830a5967-5b56-4c70-8940-ef90cd945807","Type":"ContainerStarted","Data":"ccc78cfeb9628545f91a0abd4bf0be772c3ff75dd2a71cfa7aa008bcad7da904"}
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.113412 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.136101 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7" podStartSLOduration=3.120072092 podStartE2EDuration="33.136073995s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.254152326 +0000 UTC m=+927.202974011" lastFinishedPulling="2026-01-31 04:59:08.270154219 +0000 UTC m=+957.218975914" observedRunningTime="2026-01-31 04:59:09.131653228 +0000 UTC m=+958.080474923" watchObservedRunningTime="2026-01-31 04:59:09.136073995 +0000 UTC m=+958.084895690"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.163979 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f" podStartSLOduration=2.959207003 podStartE2EDuration="33.163961791s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.062693347 +0000 UTC m=+927.011515032" lastFinishedPulling="2026-01-31 04:59:08.267448125 +0000 UTC m=+957.216269820" observedRunningTime="2026-01-31 04:59:09.159801862 +0000 UTC m=+958.108623547" watchObservedRunningTime="2026-01-31 04:59:09.163961791 +0000 UTC m=+958.112783476"
Jan 31 04:59:09 crc kubenswrapper[4832]: I0131 04:59:09.184997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt" podStartSLOduration=2.888141805 podStartE2EDuration="33.184973694s" podCreationTimestamp="2026-01-31 04:58:36 +0000 UTC" firstStartedPulling="2026-01-31 04:58:38.258484591 +0000 UTC m=+927.207306276" lastFinishedPulling="2026-01-31 04:59:08.55531648 +0000 UTC m=+957.504138165" observedRunningTime="2026-01-31 04:59:09.178641337 +0000 UTC m=+958.127463032" watchObservedRunningTime="2026-01-31 04:59:09.184973694 +0000 UTC m=+958.133795379"
Jan 31 04:59:12 crc kubenswrapper[4832]: I0131 04:59:12.399974 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57997b5fcd-hjsbn"
Jan 31 04:59:12 crc kubenswrapper[4832]: I0131 04:59:12.753882 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx"
Jan 31 04:59:13 crc kubenswrapper[4832]: I0131 04:59:13.280913 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68ffd75798-45z7h"
Jan 31 04:59:16 crc kubenswrapper[4832]: I0131 04:59:16.788477 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-jl4p7"
Jan 31 04:59:16 crc kubenswrapper[4832]: I0131 04:59:16.817528 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-4445f"
Jan 31 04:59:16 crc kubenswrapper[4832]: I0131 04:59:16.909611 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpg4j"
Jan 31 04:59:16 crc kubenswrapper[4832]: I0131 04:59:16.909674 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpg4j"
Jan 31 04:59:16 crc kubenswrapper[4832]: I0131 04:59:16.968902 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpg4j"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.038894 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-jcrbt"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.062168 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-8xwhl"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.226022 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpg4j"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.233512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.237422 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-c9pxb"
Jan 31 04:59:17 crc kubenswrapper[4832]: I0131 04:59:17.760660 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"]
Jan 31 04:59:19 crc kubenswrapper[4832]: I0131 04:59:19.193989 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hpg4j" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="registry-server" containerID="cri-o://f8dfacbb1ab33e2d670e178b61af90f7b99dc0a8fa665a11368e1f080ea863a2" gracePeriod=2
Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.213740 4832 generic.go:334] "Generic (PLEG): container finished" podID="871ed21f-5953-464f-9a56-e7b8597a586e" containerID="f8dfacbb1ab33e2d670e178b61af90f7b99dc0a8fa665a11368e1f080ea863a2" exitCode=0
Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.213814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerDied","Data":"f8dfacbb1ab33e2d670e178b61af90f7b99dc0a8fa665a11368e1f080ea863a2"}
Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.214709 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpg4j" event={"ID":"871ed21f-5953-464f-9a56-e7b8597a586e","Type":"ContainerDied","Data":"52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285"}
Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.214730 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a1210759fa07c4fafd8a48bd8a859e387712a6c47f29f1370c765803224285"
Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.267640 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.338043 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities\") pod \"871ed21f-5953-464f-9a56-e7b8597a586e\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.338348 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8cj\" (UniqueName: \"kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj\") pod \"871ed21f-5953-464f-9a56-e7b8597a586e\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.338491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content\") pod \"871ed21f-5953-464f-9a56-e7b8597a586e\" (UID: \"871ed21f-5953-464f-9a56-e7b8597a586e\") " Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.339178 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities" (OuterVolumeSpecName: "utilities") pod "871ed21f-5953-464f-9a56-e7b8597a586e" (UID: "871ed21f-5953-464f-9a56-e7b8597a586e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.345683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj" (OuterVolumeSpecName: "kube-api-access-kr8cj") pod "871ed21f-5953-464f-9a56-e7b8597a586e" (UID: "871ed21f-5953-464f-9a56-e7b8597a586e"). InnerVolumeSpecName "kube-api-access-kr8cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.364283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "871ed21f-5953-464f-9a56-e7b8597a586e" (UID: "871ed21f-5953-464f-9a56-e7b8597a586e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.440219 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.440267 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8cj\" (UniqueName: \"kubernetes.io/projected/871ed21f-5953-464f-9a56-e7b8597a586e-kube-api-access-kr8cj\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:21 crc kubenswrapper[4832]: I0131 04:59:21.440283 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871ed21f-5953-464f-9a56-e7b8597a586e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:22 crc kubenswrapper[4832]: I0131 04:59:22.224104 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpg4j" Jan 31 04:59:22 crc kubenswrapper[4832]: I0131 04:59:22.259589 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"] Jan 31 04:59:22 crc kubenswrapper[4832]: I0131 04:59:22.273177 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpg4j"] Jan 31 04:59:23 crc kubenswrapper[4832]: I0131 04:59:23.879914 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" path="/var/lib/kubelet/pods/871ed21f-5953-464f-9a56-e7b8597a586e/volumes" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.429809 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:33 crc kubenswrapper[4832]: E0131 04:59:33.431048 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="extract-content" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.431071 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="extract-content" Jan 31 04:59:33 crc kubenswrapper[4832]: E0131 04:59:33.431083 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="extract-utilities" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.431090 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="extract-utilities" Jan 31 04:59:33 crc kubenswrapper[4832]: E0131 04:59:33.431109 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="registry-server" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.431118 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" 
containerName="registry-server" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.431299 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="871ed21f-5953-464f-9a56-e7b8597a586e" containerName="registry-server" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.438797 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.442110 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-vfmtb" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.442695 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.442903 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.443430 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.451875 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.544482 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.546063 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.552772 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.560059 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.596508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.596592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqv5d\" (UniqueName: \"kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.698576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.698651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc 
kubenswrapper[4832]: I0131 04:59:33.698676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqv5d\" (UniqueName: \"kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.698887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.699010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzfp\" (UniqueName: \"kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.700052 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.732956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqv5d\" (UniqueName: \"kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d\") pod \"dnsmasq-dns-675f4bcbfc-d846c\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: 
I0131 04:59:33.759548 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.801361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.801509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzfp\" (UniqueName: \"kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.801832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.802298 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.803124 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.827747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzfp\" (UniqueName: \"kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp\") pod \"dnsmasq-dns-78dd6ddcc-l2bnf\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:33 crc kubenswrapper[4832]: I0131 04:59:33.861156 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:34 crc kubenswrapper[4832]: I0131 04:59:34.271103 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:34 crc kubenswrapper[4832]: I0131 04:59:34.280259 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:59:34 crc kubenswrapper[4832]: I0131 04:59:34.337316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" event={"ID":"450881d0-31fd-49ec-b105-b4001cbccc16","Type":"ContainerStarted","Data":"1cf88c0c2637caac3c1196e5518e55ccbd24d6ff337be35fbe569728d56a4f00"} Jan 31 04:59:34 crc kubenswrapper[4832]: I0131 04:59:34.378096 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:35 crc kubenswrapper[4832]: I0131 04:59:35.352459 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" event={"ID":"ca5112fd-b671-407b-b91e-0448e0355b53","Type":"ContainerStarted","Data":"72925228b1dd59583ff80a45e6bf344ad3c723f002bf0d39425e7e826890486b"} Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.167806 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.199159 4832 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.200746 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.217315 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.351974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.352364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh58m\" (UniqueName: \"kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.352528 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.454472 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 
31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.454539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.454618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh58m\" (UniqueName: \"kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.456875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.456865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.497155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh58m\" (UniqueName: \"kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m\") pod \"dnsmasq-dns-666b6646f7-dh7zz\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.506946 4832 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.547389 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.548609 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.556241 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.559259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.668591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.668674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.668747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpshx\" (UniqueName: \"kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc 
kubenswrapper[4832]: I0131 04:59:36.770716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpshx\" (UniqueName: \"kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.771292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.771334 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.773429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.774879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.793674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tpshx\" (UniqueName: \"kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx\") pod \"dnsmasq-dns-57d769cc4f-4bfxq\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:36 crc kubenswrapper[4832]: I0131 04:59:36.890125 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.278661 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.370227 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.371769 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.374499 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dg6kd" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.375051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.375302 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.375438 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.385340 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.386179 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 
04:59:37.386440 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.411715 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" event={"ID":"a8f429c0-20b3-4046-aa0c-946465b934d1","Type":"ContainerStarted","Data":"c7688c2a0b09c2d959b847ec4769214679ccbcc68f1df38cadd6e7629262dff2"} Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.431607 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500404 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx9s\" (UniqueName: 
\"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500825 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.500978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.501078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.501128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.535256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 04:59:37 crc kubenswrapper[4832]: W0131 04:59:37.544738 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52c1e5bd_18f2_4115_9ac3_d51a7513272e.slice/crio-10930aa85ace25b0d9ff0362aae5c0656cee0414f49fec0878290cf85fb2c1a4 WatchSource:0}: Error finding container 10930aa85ace25b0d9ff0362aae5c0656cee0414f49fec0878290cf85fb2c1a4: Status 404 returned error can't find the container with id 10930aa85ace25b0d9ff0362aae5c0656cee0414f49fec0878290cf85fb2c1a4 Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.603993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " 
pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604647 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx9s\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604872 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.604900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.605298 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"9a976894-0f59-4fb5-a297-c43c1bf88b47\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.605841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.605913 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.608168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.636248 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.637071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.646663 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.646686 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.646790 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.649164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx9s\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.672258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.683971 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.717071 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.718607 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.726412 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.726620 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.726860 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.726977 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.727091 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxj4b" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.727229 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.740847 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.749440 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.740456 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.810916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.810986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811017 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811063 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 
04:59:37.811085 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811189 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcc8n\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc 
kubenswrapper[4832]: I0131 04:59:37.811251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.811270 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.922876 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.922958 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.922989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923022 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tcc8n\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923118 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.923210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.924349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.924466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.925496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.927794 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.927942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.928862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.942719 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.953414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: 
I0131 04:59:37.961750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.965032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcc8n\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:37 crc kubenswrapper[4832]: I0131 04:59:37.971332 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.018641 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.089325 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.363899 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:59:38 crc kubenswrapper[4832]: W0131 04:59:38.386395 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a976894_0f59_4fb5_a297_c43c1bf88b47.slice/crio-82fe0d11acd6a4abda4ea2049e58853aeb63be2118775cfee3a83dc3ac4e37a8 WatchSource:0}: Error finding container 82fe0d11acd6a4abda4ea2049e58853aeb63be2118775cfee3a83dc3ac4e37a8: Status 404 returned error can't find the container with id 82fe0d11acd6a4abda4ea2049e58853aeb63be2118775cfee3a83dc3ac4e37a8 Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.448901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" event={"ID":"52c1e5bd-18f2-4115-9ac3-d51a7513272e","Type":"ContainerStarted","Data":"10930aa85ace25b0d9ff0362aae5c0656cee0414f49fec0878290cf85fb2c1a4"} Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.458355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerStarted","Data":"82fe0d11acd6a4abda4ea2049e58853aeb63be2118775cfee3a83dc3ac4e37a8"} Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.460020 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:59:38 crc kubenswrapper[4832]: W0131 04:59:38.462142 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4e0df3a_5b8c_43ad_b404_5a9716f774a6.slice/crio-93e16f1eabd9e2d6fca7a4b9ea3cbb19bd6a35f81c3883d006f5e0cf6eba25f6 WatchSource:0}: Error finding container 93e16f1eabd9e2d6fca7a4b9ea3cbb19bd6a35f81c3883d006f5e0cf6eba25f6: Status 404 returned error can't find the 
container with id 93e16f1eabd9e2d6fca7a4b9ea3cbb19bd6a35f81c3883d006f5e0cf6eba25f6 Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.949378 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.951023 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.956775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.957306 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.957507 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qh6tq" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.959000 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.966252 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 04:59:38 crc kubenswrapper[4832]: I0131 04:59:38.988414 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069368 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069449 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n57v\" (UniqueName: 
\"kubernetes.io/projected/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kube-api-access-6n57v\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069478 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069519 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.069905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.070004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.070083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.172825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.172945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n57v\" (UniqueName: \"kubernetes.io/projected/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kube-api-access-6n57v\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.172992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173151 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.173829 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.176469 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.177144 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kolla-config\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.177935 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.178360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-config-data-default\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.185041 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.190414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.199975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n57v\" (UniqueName: \"kubernetes.io/projected/b9bfe69c-78b0-4982-b9ab-7aa41bd071ec-kube-api-access-6n57v\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.218688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec\") " pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.285127 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:59:39 crc kubenswrapper[4832]: I0131 04:59:39.490900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerStarted","Data":"93e16f1eabd9e2d6fca7a4b9ea3cbb19bd6a35f81c3883d006f5e0cf6eba25f6"} Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.065146 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.066548 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.072729 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.072726 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.072992 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7vzk9" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.073008 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.097088 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200468 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200487 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") 
pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200526 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200541 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200574 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200601 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8x2\" (UniqueName: \"kubernetes.io/projected/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kube-api-access-5m8x2\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.200641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8x2\" (UniqueName: \"kubernetes.io/projected/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kube-api-access-5m8x2\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304366 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") 
pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304514 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304543 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.304952 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.306150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") 
" pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.307983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.311900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.312220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.319445 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.323185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 
04:59:40.326308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8x2\" (UniqueName: \"kubernetes.io/projected/c6177d8c-3ae2-4aee-87ac-eefdc96806e6-kube-api-access-5m8x2\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.337006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c6177d8c-3ae2-4aee-87ac-eefdc96806e6\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.390990 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.477416 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.479713 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.483897 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4krk9" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.483910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.484440 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.495172 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.609476 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-config-data\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.609532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-memcached-tls-certs\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.609591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kolla-config\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.609650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.609669 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdt5\" (UniqueName: \"kubernetes.io/projected/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kube-api-access-6tdt5\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.711921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.711963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdt5\" (UniqueName: \"kubernetes.io/projected/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kube-api-access-6tdt5\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.712028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-config-data\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.712053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.712093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kolla-config\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.712975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kolla-config\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.714017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86b5161e-fa9c-4b0d-9549-2ab191b90e33-config-data\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.718260 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-memcached-tls-certs\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.718458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b5161e-fa9c-4b0d-9549-2ab191b90e33-combined-ca-bundle\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.732286 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdt5\" (UniqueName: 
\"kubernetes.io/projected/86b5161e-fa9c-4b0d-9549-2ab191b90e33-kube-api-access-6tdt5\") pod \"memcached-0\" (UID: \"86b5161e-fa9c-4b0d-9549-2ab191b90e33\") " pod="openstack/memcached-0" Jan 31 04:59:40 crc kubenswrapper[4832]: I0131 04:59:40.803055 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.527446 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.532271 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.537527 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mmdmr" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.550276 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.594171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgk9\" (UniqueName: \"kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9\") pod \"kube-state-metrics-0\" (UID: \"ce8cc6a9-9cd5-410b-93df-1f132291a0ea\") " pod="openstack/kube-state-metrics-0" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.696247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgk9\" (UniqueName: \"kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9\") pod \"kube-state-metrics-0\" (UID: \"ce8cc6a9-9cd5-410b-93df-1f132291a0ea\") " pod="openstack/kube-state-metrics-0" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.720784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgk9\" (UniqueName: 
\"kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9\") pod \"kube-state-metrics-0\" (UID: \"ce8cc6a9-9cd5-410b-93df-1f132291a0ea\") " pod="openstack/kube-state-metrics-0" Jan 31 04:59:42 crc kubenswrapper[4832]: I0131 04:59:42.861279 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.201698 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8sq59"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.203503 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.206918 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.207167 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.208817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fh4jl" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.208976 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-nmcpt"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.210963 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.217393 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.248759 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nmcpt"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.282932 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.283911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-ovn-controller-tls-certs\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.283965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-combined-ca-bundle\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.283988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-etc-ovs\") pod 
\"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-lib\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4f5\" (UniqueName: \"kubernetes.io/projected/103522f1-37d5-48e1-8004-ab58b154d040-kube-api-access-md4f5\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103522f1-37d5-48e1-8004-ab58b154d040-scripts\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284199 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-run\") pod \"ovn-controller-ovs-nmcpt\" (UID: 
\"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-scripts\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbkfp\" (UniqueName: \"kubernetes.io/projected/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-kube-api-access-nbkfp\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284259 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-log-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284300 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-log\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.284663 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289047 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289255 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289343 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289490 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289778 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qm6dj" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.289842 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-log\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386722 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-ovn-controller-tls-certs\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-combined-ca-bundle\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386801 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-etc-ovs\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386818 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbqw\" (UniqueName: \"kubernetes.io/projected/b05c379c-cf2f-4179-a902-475d2a555294-kube-api-access-7hbqw\") pod \"ovsdbserver-nb-0\" 
(UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-lib\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4f5\" (UniqueName: \"kubernetes.io/projected/103522f1-37d5-48e1-8004-ab58b154d040-kube-api-access-md4f5\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 
crc kubenswrapper[4832]: I0131 04:59:46.386968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103522f1-37d5-48e1-8004-ab58b154d040-scripts\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.386991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-run\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05c379c-cf2f-4179-a902-475d2a555294-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-scripts\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387093 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nbkfp\" (UniqueName: \"kubernetes.io/projected/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-kube-api-access-nbkfp\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-log-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-log\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387263 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-run\") pod \"ovn-controller-ovs-nmcpt\" (UID: 
\"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.387861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-etc-ovs\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.388243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-var-lib\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.388321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-run-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.388397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/103522f1-37d5-48e1-8004-ab58b154d040-var-log-ovn\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.392955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103522f1-37d5-48e1-8004-ab58b154d040-scripts\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.394361 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-combined-ca-bundle\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.396356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-scripts\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.405635 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4f5\" (UniqueName: \"kubernetes.io/projected/103522f1-37d5-48e1-8004-ab58b154d040-kube-api-access-md4f5\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.407778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbkfp\" (UniqueName: \"kubernetes.io/projected/61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f-kube-api-access-nbkfp\") pod \"ovn-controller-ovs-nmcpt\" (UID: \"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f\") " pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.416720 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/103522f1-37d5-48e1-8004-ab58b154d040-ovn-controller-tls-certs\") pod \"ovn-controller-8sq59\" (UID: \"103522f1-37d5-48e1-8004-ab58b154d040\") " pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b05c379c-cf2f-4179-a902-475d2a555294-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488607 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbqw\" (UniqueName: \"kubernetes.io/projected/b05c379c-cf2f-4179-a902-475d2a555294-kube-api-access-7hbqw\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 
04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.488810 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.489218 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.489862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.502439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.502680 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b05c379c-cf2f-4179-a902-475d2a555294-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.502941 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.503348 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05c379c-cf2f-4179-a902-475d2a555294-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.517317 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05c379c-cf2f-4179-a902-475d2a555294-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.517841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbqw\" (UniqueName: \"kubernetes.io/projected/b05c379c-cf2f-4179-a902-475d2a555294-kube-api-access-7hbqw\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.547991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05c379c-cf2f-4179-a902-475d2a555294\") " pod="openstack/ovsdbserver-nb-0" Jan 31 
04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.553740 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8sq59" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.572241 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 04:59:46 crc kubenswrapper[4832]: I0131 04:59:46.603351 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.799934 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.801755 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.808588 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.810735 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pq479" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.811035 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.811202 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.811262 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.868625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " 
pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.869410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwtp\" (UniqueName: \"kubernetes.io/projected/e2edd879-2e11-41b2-872a-1f50cf71719f-kube-api-access-rmwtp\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971716 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rmwtp\" (UniqueName: \"kubernetes.io/projected/e2edd879-2e11-41b2-872a-1f50cf71719f-kube-api-access-rmwtp\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971852 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.971877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.972853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.972954 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.973265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.974960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2edd879-2e11-41b2-872a-1f50cf71719f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.979827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.980578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.980972 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2edd879-2e11-41b2-872a-1f50cf71719f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.993927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwtp\" (UniqueName: \"kubernetes.io/projected/e2edd879-2e11-41b2-872a-1f50cf71719f-kube-api-access-rmwtp\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:49 crc kubenswrapper[4832]: I0131 04:59:49.997764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e2edd879-2e11-41b2-872a-1f50cf71719f\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:50 crc kubenswrapper[4832]: I0131 04:59:50.135726 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:59:55 crc kubenswrapper[4832]: E0131 04:59:55.057453 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:59:55 crc kubenswrapper[4832]: E0131 04:59:55.058897 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh58m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContex
t:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-dh7zz_openstack(a8f429c0-20b3-4046-aa0c-946465b934d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:59:55 crc kubenswrapper[4832]: E0131 04:59:55.060552 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" podUID="a8f429c0-20b3-4046-aa0c-946465b934d1" Jan 31 04:59:55 crc kubenswrapper[4832]: I0131 04:59:55.265518 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:59:55 crc kubenswrapper[4832]: E0131 04:59:55.658469 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" podUID="a8f429c0-20b3-4046-aa0c-946465b934d1" Jan 31 04:59:56 crc kubenswrapper[4832]: W0131 04:59:56.169993 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce8cc6a9_9cd5_410b_93df_1f132291a0ea.slice/crio-703e5844cfd047cb33b6877290656c5a77f5e6b2fb99f5b1a799ce6152289964 WatchSource:0}: Error finding container 
703e5844cfd047cb33b6877290656c5a77f5e6b2fb99f5b1a799ce6152289964: Status 404 returned error can't find the container with id 703e5844cfd047cb33b6877290656c5a77f5e6b2fb99f5b1a799ce6152289964 Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.207464 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.207674 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpshx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4bfxq_openstack(52c1e5bd-18f2-4115-9ac3-d51a7513272e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.209068 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" podUID="52c1e5bd-18f2-4115-9ac3-d51a7513272e" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.211579 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.212986 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv 
--bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrzfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-l2bnf_openstack(ca5112fd-b671-407b-b91e-0448e0355b53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.214184 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" podUID="ca5112fd-b671-407b-b91e-0448e0355b53" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.265837 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.266287 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqv5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-d846c_openstack(450881d0-31fd-49ec-b105-b4001cbccc16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.267552 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" podUID="450881d0-31fd-49ec-b105-b4001cbccc16" Jan 31 04:59:56 crc kubenswrapper[4832]: I0131 04:59:56.662251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:59:56 crc kubenswrapper[4832]: I0131 04:59:56.671835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ce8cc6a9-9cd5-410b-93df-1f132291a0ea","Type":"ContainerStarted","Data":"703e5844cfd047cb33b6877290656c5a77f5e6b2fb99f5b1a799ce6152289964"} Jan 31 04:59:56 crc kubenswrapper[4832]: E0131 04:59:56.673786 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" podUID="52c1e5bd-18f2-4115-9ac3-d51a7513272e" Jan 31 04:59:56 crc kubenswrapper[4832]: I0131 04:59:56.848283 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:59:56 crc kubenswrapper[4832]: I0131 04:59:56.868224 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59"] Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.001031 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:59:57 crc kubenswrapper[4832]: W0131 04:59:57.047642 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6177d8c_3ae2_4aee_87ac_eefdc96806e6.slice/crio-601d99a35115b9b50260a9b9676dc9034ffb15a20a9de9a8a81218a81633ba4d WatchSource:0}: Error finding container 601d99a35115b9b50260a9b9676dc9034ffb15a20a9de9a8a81218a81633ba4d: Status 404 returned error can't find the container with id 601d99a35115b9b50260a9b9676dc9034ffb15a20a9de9a8a81218a81633ba4d Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 
04:59:57.108251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nmcpt"] Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.202104 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:59:57 crc kubenswrapper[4832]: W0131 04:59:57.237360 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61b751c2_a6a2_4d2f_aa98_5ac3dff4fb4f.slice/crio-81f5b14ee6193920973e3da40dfaa8f7fc7be3f55eda6df96be87d9c15036375 WatchSource:0}: Error finding container 81f5b14ee6193920973e3da40dfaa8f7fc7be3f55eda6df96be87d9c15036375: Status 404 returned error can't find the container with id 81f5b14ee6193920973e3da40dfaa8f7fc7be3f55eda6df96be87d9c15036375 Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.681043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" event={"ID":"ca5112fd-b671-407b-b91e-0448e0355b53","Type":"ContainerDied","Data":"72925228b1dd59583ff80a45e6bf344ad3c723f002bf0d39425e7e826890486b"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.681491 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72925228b1dd59583ff80a45e6bf344ad3c723f002bf0d39425e7e826890486b" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.682817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"86b5161e-fa9c-4b0d-9549-2ab191b90e33","Type":"ContainerStarted","Data":"d24f23be19832abac17d8da0dfb12945e65626193387e4796ce166e6db4e1722"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.684106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59" event={"ID":"103522f1-37d5-48e1-8004-ab58b154d040","Type":"ContainerStarted","Data":"d088e043e471890b6a275491afe219c59a2a25526036a54c1ee9ecfc6c447e0d"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 
04:59:57.685906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" event={"ID":"450881d0-31fd-49ec-b105-b4001cbccc16","Type":"ContainerDied","Data":"1cf88c0c2637caac3c1196e5518e55ccbd24d6ff337be35fbe569728d56a4f00"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.685932 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf88c0c2637caac3c1196e5518e55ccbd24d6ff337be35fbe569728d56a4f00" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.686991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2edd879-2e11-41b2-872a-1f50cf71719f","Type":"ContainerStarted","Data":"556d9303c2ecdb5ae4464e80e7cbcd6d286f3f767d3a21f5db809e8fc72e1b15"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.688170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c6177d8c-3ae2-4aee-87ac-eefdc96806e6","Type":"ContainerStarted","Data":"601d99a35115b9b50260a9b9676dc9034ffb15a20a9de9a8a81218a81633ba4d"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.689581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nmcpt" event={"ID":"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f","Type":"ContainerStarted","Data":"81f5b14ee6193920973e3da40dfaa8f7fc7be3f55eda6df96be87d9c15036375"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.690675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec","Type":"ContainerStarted","Data":"9a399cf757f6373c201cf42f5942f5a56e784aa5efc65c9a9a78e8e44b8b46f5"} Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.815444 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.821759 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.947050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config\") pod \"450881d0-31fd-49ec-b105-b4001cbccc16\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.947784 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config" (OuterVolumeSpecName: "config") pod "450881d0-31fd-49ec-b105-b4001cbccc16" (UID: "450881d0-31fd-49ec-b105-b4001cbccc16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948060 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc\") pod \"ca5112fd-b671-407b-b91e-0448e0355b53\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqv5d\" (UniqueName: \"kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d\") pod \"450881d0-31fd-49ec-b105-b4001cbccc16\" (UID: \"450881d0-31fd-49ec-b105-b4001cbccc16\") " Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948275 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzfp\" (UniqueName: \"kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp\") pod 
\"ca5112fd-b671-407b-b91e-0448e0355b53\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948358 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config\") pod \"ca5112fd-b671-407b-b91e-0448e0355b53\" (UID: \"ca5112fd-b671-407b-b91e-0448e0355b53\") " Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca5112fd-b671-407b-b91e-0448e0355b53" (UID: "ca5112fd-b671-407b-b91e-0448e0355b53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948837 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/450881d0-31fd-49ec-b105-b4001cbccc16-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.948858 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.949155 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config" (OuterVolumeSpecName: "config") pod "ca5112fd-b671-407b-b91e-0448e0355b53" (UID: "ca5112fd-b671-407b-b91e-0448e0355b53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.954820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp" (OuterVolumeSpecName: "kube-api-access-xrzfp") pod "ca5112fd-b671-407b-b91e-0448e0355b53" (UID: "ca5112fd-b671-407b-b91e-0448e0355b53"). InnerVolumeSpecName "kube-api-access-xrzfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.954921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d" (OuterVolumeSpecName: "kube-api-access-mqv5d") pod "450881d0-31fd-49ec-b105-b4001cbccc16" (UID: "450881d0-31fd-49ec-b105-b4001cbccc16"). InnerVolumeSpecName "kube-api-access-mqv5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:59:57 crc kubenswrapper[4832]: I0131 04:59:57.959309 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.050876 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqv5d\" (UniqueName: \"kubernetes.io/projected/450881d0-31fd-49ec-b105-b4001cbccc16-kube-api-access-mqv5d\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.050949 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrzfp\" (UniqueName: \"kubernetes.io/projected/ca5112fd-b671-407b-b91e-0448e0355b53-kube-api-access-xrzfp\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.050975 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca5112fd-b671-407b-b91e-0448e0355b53-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 
04:59:58.702505 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d846c" Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.702625 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l2bnf" Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.702511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05c379c-cf2f-4179-a902-475d2a555294","Type":"ContainerStarted","Data":"affcc6dd2d8da88c0eabe1a9995cb8fad7a28dd7a9b026403973a31219b27ced"} Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.809393 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.812724 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l2bnf"] Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.827275 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:58 crc kubenswrapper[4832]: I0131 04:59:58.833511 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d846c"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.183922 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-srkd5"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.188605 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.198298 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.205660 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-srkd5"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwbr\" (UniqueName: \"kubernetes.io/projected/d7e9680d-d2db-4c26-99be-f2e6331d64bf-kube-api-access-2gwbr\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276358 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovs-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e9680d-d2db-4c26-99be-f2e6331d64bf-config\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") 
" pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-combined-ca-bundle\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.276605 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovn-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-combined-ca-bundle\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovn-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwbr\" (UniqueName: \"kubernetes.io/projected/d7e9680d-d2db-4c26-99be-f2e6331d64bf-kube-api-access-2gwbr\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " 
pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378667 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovs-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.378753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e9680d-d2db-4c26-99be-f2e6331d64bf-config\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.379000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovn-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.379230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d7e9680d-d2db-4c26-99be-f2e6331d64bf-ovs-rundir\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 
04:59:59.379849 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e9680d-d2db-4c26-99be-f2e6331d64bf-config\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.385988 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.387442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e9680d-d2db-4c26-99be-f2e6331d64bf-combined-ca-bundle\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.399365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwbr\" (UniqueName: \"kubernetes.io/projected/d7e9680d-d2db-4c26-99be-f2e6331d64bf-kube-api-access-2gwbr\") pod \"ovn-controller-metrics-srkd5\" (UID: \"d7e9680d-d2db-4c26-99be-f2e6331d64bf\") " pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.470027 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.521103 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-srkd5" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.572411 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.579304 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.582295 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.611230 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.687308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.688201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.688840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.688956 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmggn\" (UniqueName: \"kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.732699 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.753837 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.755586 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.764444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerStarted","Data":"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2"} Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.765724 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.779991 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.787764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerStarted","Data":"4f6ee4af033e0bef4d17d30f09f5bca28c73e1fe4562858c0fe0a7a7c45afa50"} Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmggn\" (UniqueName: \"kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zcfr\" (UniqueName: \"kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790921 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.790988 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.791009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.792911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.793078 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.793487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.819503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmggn\" (UniqueName: \"kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn\") pod \"dnsmasq-dns-6bc7876d45-5wmcn\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.894768 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zcfr\" (UniqueName: \"kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.894828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.894875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config\") 
pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.894932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.894962 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.895900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.896230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.896513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " 
pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.896922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.916992 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zcfr\" (UniqueName: \"kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr\") pod \"dnsmasq-dns-8554648995-wvlhl\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.923358 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450881d0-31fd-49ec-b105-b4001cbccc16" path="/var/lib/kubelet/pods/450881d0-31fd-49ec-b105-b4001cbccc16/volumes" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.923937 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5112fd-b671-407b-b91e-0448e0355b53" path="/var/lib/kubelet/pods/ca5112fd-b671-407b-b91e-0448e0355b53/volumes" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.933890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 04:59:59 crc kubenswrapper[4832]: I0131 04:59:59.966928 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.092227 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.099635 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc\") pod \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.099693 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config\") pod \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.099815 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpshx\" (UniqueName: \"kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx\") pod \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\" (UID: \"52c1e5bd-18f2-4115-9ac3-d51a7513272e\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.100908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52c1e5bd-18f2-4115-9ac3-d51a7513272e" (UID: "52c1e5bd-18f2-4115-9ac3-d51a7513272e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.101305 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config" (OuterVolumeSpecName: "config") pod "52c1e5bd-18f2-4115-9ac3-d51a7513272e" (UID: "52c1e5bd-18f2-4115-9ac3-d51a7513272e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.101743 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.102171 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c1e5bd-18f2-4115-9ac3-d51a7513272e-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.106644 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx" (OuterVolumeSpecName: "kube-api-access-tpshx") pod "52c1e5bd-18f2-4115-9ac3-d51a7513272e" (UID: "52c1e5bd-18f2-4115-9ac3-d51a7513272e"). InnerVolumeSpecName "kube-api-access-tpshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.175816 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.177269 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.182193 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.182478 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.194043 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.203665 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpshx\" (UniqueName: \"kubernetes.io/projected/52c1e5bd-18f2-4115-9ac3-d51a7513272e-kube-api-access-tpshx\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.252793 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-srkd5"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.305403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thkp\" (UniqueName: \"kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.305473 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.305505 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.407288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.407412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thkp\" (UniqueName: \"kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.407462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.408427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.412455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.429847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thkp\" (UniqueName: \"kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp\") pod \"collect-profiles-29497260-rjsp7\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.492526 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.502347 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:00 crc kubenswrapper[4832]: W0131 05:00:00.552753 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8bc39d_fca0_40e4_8077_fb26c48b89e2.slice/crio-118c96f24f028d557d92642f9fe3594066cb5ed1971e3d529855965c651b8bec WatchSource:0}: Error finding container 118c96f24f028d557d92642f9fe3594066cb5ed1971e3d529855965c651b8bec: Status 404 returned error can't find the container with id 118c96f24f028d557d92642f9fe3594066cb5ed1971e3d529855965c651b8bec Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.622820 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.713054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc\") pod \"a8f429c0-20b3-4046-aa0c-946465b934d1\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.713618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8f429c0-20b3-4046-aa0c-946465b934d1" (UID: "a8f429c0-20b3-4046-aa0c-946465b934d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.713829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh58m\" (UniqueName: \"kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m\") pod \"a8f429c0-20b3-4046-aa0c-946465b934d1\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.713986 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config\") pod \"a8f429c0-20b3-4046-aa0c-946465b934d1\" (UID: \"a8f429c0-20b3-4046-aa0c-946465b934d1\") " Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.714492 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.715228 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config" (OuterVolumeSpecName: "config") pod "a8f429c0-20b3-4046-aa0c-946465b934d1" (UID: "a8f429c0-20b3-4046-aa0c-946465b934d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.719630 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m" (OuterVolumeSpecName: "kube-api-access-zh58m") pod "a8f429c0-20b3-4046-aa0c-946465b934d1" (UID: "a8f429c0-20b3-4046-aa0c-946465b934d1"). InnerVolumeSpecName "kube-api-access-zh58m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.804594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" event={"ID":"52c1e5bd-18f2-4115-9ac3-d51a7513272e","Type":"ContainerDied","Data":"10930aa85ace25b0d9ff0362aae5c0656cee0414f49fec0878290cf85fb2c1a4"} Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.804615 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4bfxq" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.806617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" event={"ID":"a8f429c0-20b3-4046-aa0c-946465b934d1","Type":"ContainerDied","Data":"c7688c2a0b09c2d959b847ec4769214679ccbcc68f1df38cadd6e7629262dff2"} Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.806696 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dh7zz" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.809438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" event={"ID":"ab8bc39d-fca0-40e4-8077-fb26c48b89e2","Type":"ContainerStarted","Data":"118c96f24f028d557d92642f9fe3594066cb5ed1971e3d529855965c651b8bec"} Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.811424 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-srkd5" event={"ID":"d7e9680d-d2db-4c26-99be-f2e6331d64bf","Type":"ContainerStarted","Data":"976142a0c27b8892a0772b7053aacd96d736e911458fb1c26d923de338a21c3a"} Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.816070 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh58m\" (UniqueName: \"kubernetes.io/projected/a8f429c0-20b3-4046-aa0c-946465b934d1-kube-api-access-zh58m\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc 
kubenswrapper[4832]: I0131 05:00:00.816094 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8f429c0-20b3-4046-aa0c-946465b934d1-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.878081 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.892546 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dh7zz"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.925972 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 05:00:00 crc kubenswrapper[4832]: I0131 05:00:00.932483 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4bfxq"] Jan 31 05:00:01 crc kubenswrapper[4832]: I0131 05:00:01.582227 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 05:00:01 crc kubenswrapper[4832]: I0131 05:00:01.817642 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7"] Jan 31 05:00:01 crc kubenswrapper[4832]: I0131 05:00:01.872127 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c1e5bd-18f2-4115-9ac3-d51a7513272e" path="/var/lib/kubelet/pods/52c1e5bd-18f2-4115-9ac3-d51a7513272e/volumes" Jan 31 05:00:01 crc kubenswrapper[4832]: I0131 05:00:01.873146 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f429c0-20b3-4046-aa0c-946465b934d1" path="/var/lib/kubelet/pods/a8f429c0-20b3-4046-aa0c-946465b934d1/volumes" Jan 31 05:00:03 crc kubenswrapper[4832]: I0131 05:00:03.848058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" 
event={"ID":"487160a9-724e-4892-a8e6-886547709572","Type":"ContainerStarted","Data":"5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559"} Jan 31 05:00:04 crc kubenswrapper[4832]: W0131 05:00:04.362712 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf93d869f_9e5c_4cb6_a66f_2930752e74dc.slice/crio-26de74dd8571140c11f0d0bd70367d1d84ba242df682fd914b576ea34f0d4267 WatchSource:0}: Error finding container 26de74dd8571140c11f0d0bd70367d1d84ba242df682fd914b576ea34f0d4267: Status 404 returned error can't find the container with id 26de74dd8571140c11f0d0bd70367d1d84ba242df682fd914b576ea34f0d4267 Jan 31 05:00:04 crc kubenswrapper[4832]: I0131 05:00:04.860216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wvlhl" event={"ID":"f93d869f-9e5c-4cb6-a66f-2930752e74dc","Type":"ContainerStarted","Data":"26de74dd8571140c11f0d0bd70367d1d84ba242df682fd914b576ea34f0d4267"} Jan 31 05:00:08 crc kubenswrapper[4832]: I0131 05:00:08.915276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"86b5161e-fa9c-4b0d-9549-2ab191b90e33","Type":"ContainerStarted","Data":"318b427305a48caf0b75abfcef5f9e8c140051feef5b981b69ed7f81806d7111"} Jan 31 05:00:08 crc kubenswrapper[4832]: I0131 05:00:08.916171 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 05:00:08 crc kubenswrapper[4832]: I0131 05:00:08.942930 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.360717451 podStartE2EDuration="28.942902999s" podCreationTimestamp="2026-01-31 04:59:40 +0000 UTC" firstStartedPulling="2026-01-31 04:59:57.143742846 +0000 UTC m=+1006.092564531" lastFinishedPulling="2026-01-31 05:00:07.725928394 +0000 UTC m=+1016.674750079" observedRunningTime="2026-01-31 05:00:08.939523244 +0000 UTC m=+1017.888344929" 
watchObservedRunningTime="2026-01-31 05:00:08.942902999 +0000 UTC m=+1017.891724684" Jan 31 05:00:08 crc kubenswrapper[4832]: I0131 05:00:08.948863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" event={"ID":"487160a9-724e-4892-a8e6-886547709572","Type":"ContainerStarted","Data":"0b3f23d1b3d272baabb0ae3c3267521a9f733dae53e3a534cc0cfb7c69598ea4"} Jan 31 05:00:08 crc kubenswrapper[4832]: I0131 05:00:08.971860 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" podStartSLOduration=8.971838229 podStartE2EDuration="8.971838229s" podCreationTimestamp="2026-01-31 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:08.97028217 +0000 UTC m=+1017.919103855" watchObservedRunningTime="2026-01-31 05:00:08.971838229 +0000 UTC m=+1017.920659914" Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.962838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05c379c-cf2f-4179-a902-475d2a555294","Type":"ContainerStarted","Data":"d77ef1f7a49b8baf417d69fccfddf8ebcb03c69b466e721e940759d4b6e3c2cf"} Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.963421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05c379c-cf2f-4179-a902-475d2a555294","Type":"ContainerStarted","Data":"2bda3b33af2d928758958c486cca854552e5e0c26ee4051e1e15c4875bb8d019"} Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.965716 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerID="ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e" exitCode=0 Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.966663 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" event={"ID":"ab8bc39d-fca0-40e4-8077-fb26c48b89e2","Type":"ContainerDied","Data":"ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e"} Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.969726 4832 generic.go:334] "Generic (PLEG): container finished" podID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerID="80dabdef5a2ad3f7d26e56c475a16a2ec4cda49de773ec43b0a12c3487521a0c" exitCode=0 Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.969854 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wvlhl" event={"ID":"f93d869f-9e5c-4cb6-a66f-2930752e74dc","Type":"ContainerDied","Data":"80dabdef5a2ad3f7d26e56c475a16a2ec4cda49de773ec43b0a12c3487521a0c"} Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.984154 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-srkd5" event={"ID":"d7e9680d-d2db-4c26-99be-f2e6331d64bf","Type":"ContainerStarted","Data":"24cf92e03515557a2b1ebf0d78482596d02d8da9f242fd1835f52d5099c993e5"} Jan 31 05:00:09 crc kubenswrapper[4832]: I0131 05:00:09.990721 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec","Type":"ContainerStarted","Data":"e04de9172eac78760acf20678c046681a569e3528f0570b202010a3963eef417"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.014842 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.756411222 podStartE2EDuration="25.014807522s" podCreationTimestamp="2026-01-31 04:59:45 +0000 UTC" firstStartedPulling="2026-01-31 04:59:57.899219079 +0000 UTC m=+1006.848040774" lastFinishedPulling="2026-01-31 05:00:08.157615389 +0000 UTC m=+1017.106437074" observedRunningTime="2026-01-31 05:00:09.998505975 +0000 UTC m=+1018.947327690" watchObservedRunningTime="2026-01-31 05:00:10.014807522 +0000 UTC 
m=+1018.963629227" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.019932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ce8cc6a9-9cd5-410b-93df-1f132291a0ea","Type":"ContainerStarted","Data":"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.020371 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.029410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59" event={"ID":"103522f1-37d5-48e1-8004-ab58b154d040","Type":"ContainerStarted","Data":"86e2f3ed225cbdd97ed15f145a9205452ff3fe9f9e1bb9048ebd6569f90b2f2c"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.029506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8sq59" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.033491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2edd879-2e11-41b2-872a-1f50cf71719f","Type":"ContainerStarted","Data":"bbc0c4882cbd55782218e11564f43a3991821703828d697ecd9746f3c3c4dd47"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.033705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e2edd879-2e11-41b2-872a-1f50cf71719f","Type":"ContainerStarted","Data":"ff519f9dd982880c01aec1e274da760165c99a6ad069ed9ba49cf622997733eb"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.035681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c6177d8c-3ae2-4aee-87ac-eefdc96806e6","Type":"ContainerStarted","Data":"d87bccb10c205d9f11e510e4b9d86ad7f1e7896442ed3d8d1c2ff97bd162ca6e"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.038703 4832 generic.go:334] "Generic (PLEG): container 
finished" podID="61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f" containerID="febf0a32641fe2ce2b389dfaa7d9a02a0faf981f17d2ce18245e03970c818d71" exitCode=0 Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.038781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nmcpt" event={"ID":"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f","Type":"ContainerDied","Data":"febf0a32641fe2ce2b389dfaa7d9a02a0faf981f17d2ce18245e03970c818d71"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.057779 4832 generic.go:334] "Generic (PLEG): container finished" podID="487160a9-724e-4892-a8e6-886547709572" containerID="0b3f23d1b3d272baabb0ae3c3267521a9f733dae53e3a534cc0cfb7c69598ea4" exitCode=0 Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.058810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" event={"ID":"487160a9-724e-4892-a8e6-886547709572","Type":"ContainerDied","Data":"0b3f23d1b3d272baabb0ae3c3267521a9f733dae53e3a534cc0cfb7c69598ea4"} Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.130799 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-srkd5" podStartSLOduration=3.524234225 podStartE2EDuration="11.130770489s" podCreationTimestamp="2026-01-31 04:59:59 +0000 UTC" firstStartedPulling="2026-01-31 05:00:00.554166041 +0000 UTC m=+1009.502987736" lastFinishedPulling="2026-01-31 05:00:08.160702315 +0000 UTC m=+1017.109524000" observedRunningTime="2026-01-31 05:00:10.115017509 +0000 UTC m=+1019.063839194" watchObservedRunningTime="2026-01-31 05:00:10.130770489 +0000 UTC m=+1019.079592174" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.139880 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.246653 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=15.665085398 podStartE2EDuration="28.246600761s" podCreationTimestamp="2026-01-31 04:59:42 +0000 UTC" firstStartedPulling="2026-01-31 04:59:56.197356495 +0000 UTC m=+1005.146178180" lastFinishedPulling="2026-01-31 05:00:08.778871838 +0000 UTC m=+1017.727693543" observedRunningTime="2026-01-31 05:00:10.219838349 +0000 UTC m=+1019.168660034" watchObservedRunningTime="2026-01-31 05:00:10.246600761 +0000 UTC m=+1019.195422456" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.284989 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8sq59" podStartSLOduration=12.62051855 podStartE2EDuration="24.284961884s" podCreationTimestamp="2026-01-31 04:59:46 +0000 UTC" firstStartedPulling="2026-01-31 04:59:56.947468412 +0000 UTC m=+1005.896290097" lastFinishedPulling="2026-01-31 05:00:08.611911746 +0000 UTC m=+1017.560733431" observedRunningTime="2026-01-31 05:00:10.241615626 +0000 UTC m=+1019.190437311" watchObservedRunningTime="2026-01-31 05:00:10.284961884 +0000 UTC m=+1019.233783569" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.313956 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.046655642 podStartE2EDuration="22.313925555s" podCreationTimestamp="2026-01-31 04:59:48 +0000 UTC" firstStartedPulling="2026-01-31 04:59:57.333744294 +0000 UTC m=+1006.282565979" lastFinishedPulling="2026-01-31 05:00:08.601014197 +0000 UTC m=+1017.549835892" observedRunningTime="2026-01-31 05:00:10.276858592 +0000 UTC m=+1019.225680277" watchObservedRunningTime="2026-01-31 05:00:10.313925555 +0000 UTC m=+1019.262747240" Jan 31 05:00:10 crc kubenswrapper[4832]: I0131 05:00:10.604430 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.069850 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-8554648995-wvlhl" event={"ID":"f93d869f-9e5c-4cb6-a66f-2930752e74dc","Type":"ContainerStarted","Data":"f1859b3066d40cfbfa77dd9f4a222fee767636768481d00adcfb191bdf7d7a69"} Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.070060 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.072711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" event={"ID":"ab8bc39d-fca0-40e4-8077-fb26c48b89e2","Type":"ContainerStarted","Data":"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9"} Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.073941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.078304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nmcpt" event={"ID":"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f","Type":"ContainerStarted","Data":"d85c3d093e905500fad5884cec72151195239362234841a0cdb191bbefe89c4d"} Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.078443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.078525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nmcpt" event={"ID":"61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f","Type":"ContainerStarted","Data":"ed3c5a56e1ad0141a54abcaccdd6946d2ce358f2e68c5cf24595a42a8c27a992"} Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.081545 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.106946 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-8554648995-wvlhl" podStartSLOduration=7.876962812 podStartE2EDuration="12.106921034s" podCreationTimestamp="2026-01-31 04:59:59 +0000 UTC" firstStartedPulling="2026-01-31 05:00:04.386532627 +0000 UTC m=+1013.335354312" lastFinishedPulling="2026-01-31 05:00:08.616490849 +0000 UTC m=+1017.565312534" observedRunningTime="2026-01-31 05:00:11.10516983 +0000 UTC m=+1020.053991535" watchObservedRunningTime="2026-01-31 05:00:11.106921034 +0000 UTC m=+1020.055742719" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.136175 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.136483 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-nmcpt" podStartSLOduration=14.652671385 podStartE2EDuration="25.136431743s" podCreationTimestamp="2026-01-31 04:59:46 +0000 UTC" firstStartedPulling="2026-01-31 04:59:57.242436065 +0000 UTC m=+1006.191257750" lastFinishedPulling="2026-01-31 05:00:07.726196383 +0000 UTC m=+1016.675018108" observedRunningTime="2026-01-31 05:00:11.13378021 +0000 UTC m=+1020.082601905" watchObservedRunningTime="2026-01-31 05:00:11.136431743 +0000 UTC m=+1020.085253428" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.170497 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" podStartSLOduration=4.576153376 podStartE2EDuration="12.170469441s" podCreationTimestamp="2026-01-31 04:59:59 +0000 UTC" firstStartedPulling="2026-01-31 05:00:00.562081006 +0000 UTC m=+1009.510902691" lastFinishedPulling="2026-01-31 05:00:08.156397071 +0000 UTC m=+1017.105218756" observedRunningTime="2026-01-31 05:00:11.166342782 +0000 UTC m=+1020.115164477" watchObservedRunningTime="2026-01-31 05:00:11.170469441 +0000 UTC m=+1020.119291126" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.450428 4832 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.553456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume\") pod \"487160a9-724e-4892-a8e6-886547709572\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.553803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume\") pod \"487160a9-724e-4892-a8e6-886547709572\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.553942 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thkp\" (UniqueName: \"kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp\") pod \"487160a9-724e-4892-a8e6-886547709572\" (UID: \"487160a9-724e-4892-a8e6-886547709572\") " Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.554392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume" (OuterVolumeSpecName: "config-volume") pod "487160a9-724e-4892-a8e6-886547709572" (UID: "487160a9-724e-4892-a8e6-886547709572"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.562021 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "487160a9-724e-4892-a8e6-886547709572" (UID: "487160a9-724e-4892-a8e6-886547709572"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.562321 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp" (OuterVolumeSpecName: "kube-api-access-4thkp") pod "487160a9-724e-4892-a8e6-886547709572" (UID: "487160a9-724e-4892-a8e6-886547709572"). InnerVolumeSpecName "kube-api-access-4thkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.604275 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.657091 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487160a9-724e-4892-a8e6-886547709572-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.657146 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/487160a9-724e-4892-a8e6-886547709572-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:11 crc kubenswrapper[4832]: I0131 05:00:11.657169 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thkp\" (UniqueName: \"kubernetes.io/projected/487160a9-724e-4892-a8e6-886547709572-kube-api-access-4thkp\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:12 crc kubenswrapper[4832]: I0131 05:00:12.086621 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" Jan 31 05:00:12 crc kubenswrapper[4832]: I0131 05:00:12.086649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7" event={"ID":"487160a9-724e-4892-a8e6-886547709572","Type":"ContainerDied","Data":"5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559"} Jan 31 05:00:12 crc kubenswrapper[4832]: I0131 05:00:12.086704 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559" Jan 31 05:00:13 crc kubenswrapper[4832]: I0131 05:00:13.101714 4832 generic.go:334] "Generic (PLEG): container finished" podID="c6177d8c-3ae2-4aee-87ac-eefdc96806e6" containerID="d87bccb10c205d9f11e510e4b9d86ad7f1e7896442ed3d8d1c2ff97bd162ca6e" exitCode=0 Jan 31 05:00:13 crc kubenswrapper[4832]: I0131 05:00:13.102059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c6177d8c-3ae2-4aee-87ac-eefdc96806e6","Type":"ContainerDied","Data":"d87bccb10c205d9f11e510e4b9d86ad7f1e7896442ed3d8d1c2ff97bd162ca6e"} Jan 31 05:00:13 crc kubenswrapper[4832]: I0131 05:00:13.683218 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.117967 4832 generic.go:334] "Generic (PLEG): container finished" podID="b9bfe69c-78b0-4982-b9ab-7aa41bd071ec" containerID="e04de9172eac78760acf20678c046681a569e3528f0570b202010a3963eef417" exitCode=0 Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.118081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec","Type":"ContainerDied","Data":"e04de9172eac78760acf20678c046681a569e3528f0570b202010a3963eef417"} Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 
05:00:14.122693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c6177d8c-3ae2-4aee-87ac-eefdc96806e6","Type":"ContainerStarted","Data":"46c4fc745842acf138e0b23a2f46f549d26941ba0e873ac157032f4d3bbb4707"} Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.195965 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.200359 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.088490761 podStartE2EDuration="35.200323301s" podCreationTimestamp="2026-01-31 04:59:39 +0000 UTC" firstStartedPulling="2026-01-31 04:59:57.050038711 +0000 UTC m=+1005.998860396" lastFinishedPulling="2026-01-31 05:00:08.161871211 +0000 UTC m=+1017.110692936" observedRunningTime="2026-01-31 05:00:14.195611455 +0000 UTC m=+1023.144433150" watchObservedRunningTime="2026-01-31 05:00:14.200323301 +0000 UTC m=+1023.149145026" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.214600 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.279549 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.657997 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 05:00:14 crc kubenswrapper[4832]: E0131 05:00:14.658455 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487160a9-724e-4892-a8e6-886547709572" containerName="collect-profiles" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.658472 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="487160a9-724e-4892-a8e6-886547709572" containerName="collect-profiles" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 
05:00:14.658675 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="487160a9-724e-4892-a8e6-886547709572" containerName="collect-profiles" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.659641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.666063 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.666131 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.669417 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.677653 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lxrr6" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.678521 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832876 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-config\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-kube-api-access-srmsv\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-scripts\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.832990 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.833018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.939971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.940820 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.941789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.942111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-config\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.942855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-kube-api-access-srmsv\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.942994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-scripts\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc 
kubenswrapper[4832]: I0131 05:00:14.943180 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.943397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.944503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-config\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.945701 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-scripts\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.953974 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.954201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" 
(UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.954542 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.968970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8-kube-api-access-srmsv\") pod \"ovn-northd-0\" (UID: \"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8\") " pod="openstack/ovn-northd-0" Jan 31 05:00:14 crc kubenswrapper[4832]: I0131 05:00:14.998620 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.103189 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.160470 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b9bfe69c-78b0-4982-b9ab-7aa41bd071ec","Type":"ContainerStarted","Data":"4746c0fab965694cd78c35ec18bab3b5ecc618b446dde32f0de2fbececbff912"} Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.213647 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.214013 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerName="dnsmasq-dns" containerID="cri-o://9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9" gracePeriod=10 Jan 31 05:00:15 crc 
kubenswrapper[4832]: I0131 05:00:15.221079 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.227373 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.299140292 podStartE2EDuration="38.227348619s" podCreationTimestamp="2026-01-31 04:59:37 +0000 UTC" firstStartedPulling="2026-01-31 04:59:56.672118339 +0000 UTC m=+1005.620940024" lastFinishedPulling="2026-01-31 05:00:08.600326676 +0000 UTC m=+1017.549148351" observedRunningTime="2026-01-31 05:00:15.204295903 +0000 UTC m=+1024.153117828" watchObservedRunningTime="2026-01-31 05:00:15.227348619 +0000 UTC m=+1024.176170304" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.299676 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.696180 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.804113 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.871944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc\") pod \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.872037 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config\") pod \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.872173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb\") pod \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.872278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmggn\" (UniqueName: \"kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn\") pod \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\" (UID: \"ab8bc39d-fca0-40e4-8077-fb26c48b89e2\") " Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.879645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn" (OuterVolumeSpecName: "kube-api-access-zmggn") pod "ab8bc39d-fca0-40e4-8077-fb26c48b89e2" (UID: "ab8bc39d-fca0-40e4-8077-fb26c48b89e2"). 
InnerVolumeSpecName "kube-api-access-zmggn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.921211 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab8bc39d-fca0-40e4-8077-fb26c48b89e2" (UID: "ab8bc39d-fca0-40e4-8077-fb26c48b89e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.921210 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab8bc39d-fca0-40e4-8077-fb26c48b89e2" (UID: "ab8bc39d-fca0-40e4-8077-fb26c48b89e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.929526 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config" (OuterVolumeSpecName: "config") pod "ab8bc39d-fca0-40e4-8077-fb26c48b89e2" (UID: "ab8bc39d-fca0-40e4-8077-fb26c48b89e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.974482 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.974525 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.974536 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:15 crc kubenswrapper[4832]: I0131 05:00:15.974549 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmggn\" (UniqueName: \"kubernetes.io/projected/ab8bc39d-fca0-40e4-8077-fb26c48b89e2-kube-api-access-zmggn\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.167666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8","Type":"ContainerStarted","Data":"59340539b061cdb98d8a8e5577a9788bfea01434097e585af4dc8b398a8e3319"} Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.171130 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerID="9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9" exitCode=0 Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.171191 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" event={"ID":"ab8bc39d-fca0-40e4-8077-fb26c48b89e2","Type":"ContainerDied","Data":"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9"} Jan 31 05:00:16 crc 
kubenswrapper[4832]: I0131 05:00:16.171231 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" event={"ID":"ab8bc39d-fca0-40e4-8077-fb26c48b89e2","Type":"ContainerDied","Data":"118c96f24f028d557d92642f9fe3594066cb5ed1971e3d529855965c651b8bec"} Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.171263 4832 scope.go:117] "RemoveContainer" containerID="9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.171636 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-5wmcn" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.215288 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.221810 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-5wmcn"] Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.390876 4832 scope.go:117] "RemoveContainer" containerID="ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.444900 4832 scope.go:117] "RemoveContainer" containerID="9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9" Jan 31 05:00:16 crc kubenswrapper[4832]: E0131 05:00:16.445754 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9\": container with ID starting with 9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9 not found: ID does not exist" containerID="9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.445821 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9"} err="failed to get container status \"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9\": rpc error: code = NotFound desc = could not find container \"9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9\": container with ID starting with 9036526351d448694fb2ff1a8e2ce53fd5ecbe3f07414644e9004fe5050496a9 not found: ID does not exist" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.445917 4832 scope.go:117] "RemoveContainer" containerID="ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e" Jan 31 05:00:16 crc kubenswrapper[4832]: E0131 05:00:16.446507 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e\": container with ID starting with ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e not found: ID does not exist" containerID="ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e" Jan 31 05:00:16 crc kubenswrapper[4832]: I0131 05:00:16.446547 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e"} err="failed to get container status \"ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e\": rpc error: code = NotFound desc = could not find container \"ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e\": container with ID starting with ad203be4becd92c344ee1bad6dee987849ae18c08bc4929c08d43f3eacabd16e not found: ID does not exist" Jan 31 05:00:17 crc kubenswrapper[4832]: E0131 05:00:17.000304 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:00:17 crc kubenswrapper[4832]: I0131 05:00:17.190163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8","Type":"ContainerStarted","Data":"774e61f78cd3e3fe3cce8dda26d9a0042fdbe1eddd56a688d6f38cbc4841fca4"} Jan 31 05:00:17 crc kubenswrapper[4832]: I0131 05:00:17.190409 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8","Type":"ContainerStarted","Data":"fe0a66daa056da81b78715a9d9c95888d4eda8a8ca3c8e57d6b1aa5143280d0c"} Jan 31 05:00:17 crc kubenswrapper[4832]: I0131 05:00:17.191734 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 05:00:17 crc kubenswrapper[4832]: I0131 05:00:17.228485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.091452371 podStartE2EDuration="3.228452149s" podCreationTimestamp="2026-01-31 05:00:14 +0000 UTC" firstStartedPulling="2026-01-31 05:00:15.310845567 +0000 UTC m=+1024.259667252" lastFinishedPulling="2026-01-31 05:00:16.447845345 +0000 UTC m=+1025.396667030" observedRunningTime="2026-01-31 05:00:17.211040108 +0000 UTC m=+1026.159861803" watchObservedRunningTime="2026-01-31 05:00:17.228452149 +0000 UTC m=+1026.177273834" Jan 31 05:00:17 crc kubenswrapper[4832]: I0131 05:00:17.869999 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" 
path="/var/lib/kubelet/pods/ab8bc39d-fca0-40e4-8077-fb26c48b89e2/volumes" Jan 31 05:00:19 crc kubenswrapper[4832]: I0131 05:00:19.285353 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 05:00:19 crc kubenswrapper[4832]: I0131 05:00:19.285743 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 05:00:20 crc kubenswrapper[4832]: I0131 05:00:20.392801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 05:00:20 crc kubenswrapper[4832]: I0131 05:00:20.393552 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 05:00:20 crc kubenswrapper[4832]: I0131 05:00:20.478002 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 05:00:21 crc kubenswrapper[4832]: I0131 05:00:21.310219 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 05:00:21 crc kubenswrapper[4832]: I0131 05:00:21.692801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 05:00:21 crc kubenswrapper[4832]: I0131 05:00:21.782550 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.882519 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:00:22 crc kubenswrapper[4832]: E0131 05:00:22.883007 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerName="dnsmasq-dns" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.883028 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" 
containerName="dnsmasq-dns" Jan 31 05:00:22 crc kubenswrapper[4832]: E0131 05:00:22.883058 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerName="init" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.883066 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerName="init" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.883234 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8bc39d-fca0-40e4-8077-fb26c48b89e2" containerName="dnsmasq-dns" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.890514 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.890669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.912059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.912116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.912447 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vs8\" (UniqueName: 
\"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.912621 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.912732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:22 crc kubenswrapper[4832]: I0131 05:00:22.914781 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.014719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8vs8\" (UniqueName: \"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.014778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc 
kubenswrapper[4832]: I0131 05:00:23.014811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.015741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.015909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.016973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.016997 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.017677 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.017743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.038620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8vs8\" (UniqueName: \"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8\") pod \"dnsmasq-dns-b8fbc5445-8lwb5\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.214287 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:23 crc kubenswrapper[4832]: I0131 05:00:23.778106 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:00:23 crc kubenswrapper[4832]: W0131 05:00:23.781214 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39e727aa_9180_4ebf_af96_20abf1d96bea.slice/crio-1db3f11ccd7930dbcb469d4282a0a7b17f7bdddea720f84d321761bf00baf3a9 WatchSource:0}: Error finding container 1db3f11ccd7930dbcb469d4282a0a7b17f7bdddea720f84d321761bf00baf3a9: Status 404 returned error can't find the container with id 1db3f11ccd7930dbcb469d4282a0a7b17f7bdddea720f84d321761bf00baf3a9 Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.104301 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.116603 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.119740 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j2752" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.119809 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.120451 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.120493 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.127254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.261957 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.262021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd22j\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-kube-api-access-jd22j\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.262055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " 
pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.262092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.262131 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-cache\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.262148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-lock\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.267641 4832 generic.go:334] "Generic (PLEG): container finished" podID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerID="f610b828839772cabec873637cdd80e941d787ed37c3c4aa27f4ec8666384255" exitCode=0 Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.267864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" event={"ID":"39e727aa-9180-4ebf-af96-20abf1d96bea","Type":"ContainerDied","Data":"f610b828839772cabec873637cdd80e941d787ed37c3c4aa27f4ec8666384255"} Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.268216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" event={"ID":"39e727aa-9180-4ebf-af96-20abf1d96bea","Type":"ContainerStarted","Data":"1db3f11ccd7930dbcb469d4282a0a7b17f7bdddea720f84d321761bf00baf3a9"} 
Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.364051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.364352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd22j\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-kube-api-access-jd22j\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.364521 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.364388 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.364723 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.364770 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift podName:e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087 nodeName:}" failed. No retries permitted until 2026-01-31 05:00:24.86475097 +0000 UTC m=+1033.813572655 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift") pod "swift-storage-0" (UID: "e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087") : configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.364955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.365665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-cache\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.365815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-lock\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.366701 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-lock\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.366549 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-cache\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 
05:00:24.365534 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.373866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.390358 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd22j\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-kube-api-access-jd22j\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.406226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.772152 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6j6v7"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.773167 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.774869 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.775367 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.775633 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.833883 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6j6v7"] Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.834572 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-mv6t2 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-6j6v7" podUID="b6a0f6df-55ea-40c0-94eb-3add56c481dc" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.859248 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bwd8q"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.861398 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.868347 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bwd8q"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.875851 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.875899 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.875914 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6j6v7"] Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.875945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.876059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6t2\" (UniqueName: \"kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 
05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.876276 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.876367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.876389 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.876443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.876713 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: E0131 05:00:24.876727 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: 
E0131 05:00:24.876771 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift podName:e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087 nodeName:}" failed. No retries permitted until 2026-01-31 05:00:25.876755162 +0000 UTC m=+1034.825576847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift") pod "swift-storage-0" (UID: "e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087") : configmap "swift-ring-files" not found Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.978785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.978925 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.978980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift\") pod 
\"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979237 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv5l\" (UniqueName: \"kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6t2\" (UniqueName: \"kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2\") pod 
\"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle\") pod \"swift-ring-rebalance-6j6v7\" (UID: 
\"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.979839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.980478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.980620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.983208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc kubenswrapper[4832]: I0131 05:00:24.984342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:24 crc 
kubenswrapper[4832]: I0131 05:00:24.984579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.000011 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6t2\" (UniqueName: \"kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2\") pod \"swift-ring-rebalance-6j6v7\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082474 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv5l\" (UniqueName: \"kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082606 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.082719 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.083529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.083858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.084093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.085004 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.086422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.087001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.088413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle\") pod \"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.106788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvv5l\" (UniqueName: \"kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l\") pod 
\"swift-ring-rebalance-bwd8q\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.181691 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.286676 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.286762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" event={"ID":"39e727aa-9180-4ebf-af96-20abf1d96bea","Type":"ContainerStarted","Data":"e2f878d85001e88b170f1a3b82b187a445e5a69df7d135da3d7d9830228130ea"} Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.287308 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.316147 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.329225 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podStartSLOduration=3.329201862 podStartE2EDuration="3.329201862s" podCreationTimestamp="2026-01-31 05:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:25.320983436 +0000 UTC m=+1034.269805141" watchObservedRunningTime="2026-01-31 05:00:25.329201862 +0000 UTC m=+1034.278023547" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.388914 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.388973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389164 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: 
\"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389206 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6t2\" (UniqueName: \"kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2\") pod \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\" (UID: \"b6a0f6df-55ea-40c0-94eb-3add56c481dc\") " Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389772 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.389992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.390061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts" (OuterVolumeSpecName: "scripts") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.390172 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.390189 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6a0f6df-55ea-40c0-94eb-3add56c481dc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.390436 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6a0f6df-55ea-40c0-94eb-3add56c481dc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.401835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.401918 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.401964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2" (OuterVolumeSpecName: "kube-api-access-mv6t2") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "kube-api-access-mv6t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.408392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6a0f6df-55ea-40c0-94eb-3add56c481dc" (UID: "b6a0f6df-55ea-40c0-94eb-3add56c481dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.492836 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.492879 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6t2\" (UniqueName: \"kubernetes.io/projected/b6a0f6df-55ea-40c0-94eb-3add56c481dc-kube-api-access-mv6t2\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.492892 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.492901 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6a0f6df-55ea-40c0-94eb-3add56c481dc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.689986 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bwd8q"] Jan 31 05:00:25 crc kubenswrapper[4832]: W0131 05:00:25.693428 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d790d64_4815_452e_9f17_13b1b9b75c35.slice/crio-aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28 WatchSource:0}: Error finding container aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28: Status 404 returned error can't find the container with id aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28 Jan 31 05:00:25 crc kubenswrapper[4832]: I0131 05:00:25.900539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:25 crc kubenswrapper[4832]: E0131 05:00:25.900812 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 05:00:25 crc kubenswrapper[4832]: E0131 05:00:25.900868 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 05:00:25 crc kubenswrapper[4832]: E0131 05:00:25.900972 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift podName:e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087 nodeName:}" failed. No retries permitted until 2026-01-31 05:00:27.900943432 +0000 UTC m=+1036.849765127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift") pod "swift-storage-0" (UID: "e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087") : configmap "swift-ring-files" not found Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.078593 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-72zzj"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.079675 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.098204 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-72zzj"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.155883 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-40db-account-create-update-26dfv"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.159099 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.168279 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.197422 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-40db-account-create-update-26dfv"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.206760 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.207308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjjs\" (UniqueName: \"kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.300027 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6j6v7" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.300013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bwd8q" event={"ID":"2d790d64-4815-452e-9f17-13b1b9b75c35","Type":"ContainerStarted","Data":"aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28"} Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.308617 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.308768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.308918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjjs\" (UniqueName: \"kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.308959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrsc\" (UniqueName: \"kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " 
pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.309533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.342946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjjs\" (UniqueName: \"kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs\") pod \"glance-db-create-72zzj\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.358777 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6j6v7"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.365344 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6j6v7"] Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.406609 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-72zzj" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.410684 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrsc\" (UniqueName: \"kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.410985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.412455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.434328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrsc\" (UniqueName: \"kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc\") pod \"glance-40db-account-create-update-26dfv\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.500436 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:26 crc kubenswrapper[4832]: I0131 05:00:26.937837 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-72zzj"] Jan 31 05:00:26 crc kubenswrapper[4832]: W0131 05:00:26.948003 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf835b76_ea7d_499a_b532_229186e9e54f.slice/crio-3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727 WatchSource:0}: Error finding container 3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727: Status 404 returned error can't find the container with id 3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727 Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.082105 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-40db-account-create-update-26dfv"] Jan 31 05:00:27 crc kubenswrapper[4832]: W0131 05:00:27.085227 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b9dc67_5aba_4ee4_9771_78e9ad27206c.slice/crio-eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8 WatchSource:0}: Error finding container eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8: Status 404 returned error can't find the container with id eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8 Jan 31 05:00:27 crc kubenswrapper[4832]: E0131 05:00:27.241667 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.311259 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-40db-account-create-update-26dfv" event={"ID":"c7b9dc67-5aba-4ee4-9771-78e9ad27206c","Type":"ContainerStarted","Data":"98db13dbe1176e447b3199cd7e1f4dcb0f0be4fd50b1f933d8e0bd98860dc12b"} Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.311315 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-40db-account-create-update-26dfv" event={"ID":"c7b9dc67-5aba-4ee4-9771-78e9ad27206c","Type":"ContainerStarted","Data":"eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8"} Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.312499 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-72zzj" event={"ID":"cf835b76-ea7d-499a-b532-229186e9e54f","Type":"ContainerStarted","Data":"7ab17ce58d44c113ee06e7986521b56c6c12f4da917ece9486b834430f041ea4"} Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.312576 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-72zzj" event={"ID":"cf835b76-ea7d-499a-b532-229186e9e54f","Type":"ContainerStarted","Data":"3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727"} Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.338908 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-40db-account-create-update-26dfv" podStartSLOduration=1.3388867979999999 podStartE2EDuration="1.338886798s" podCreationTimestamp="2026-01-31 05:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:27.338299239 +0000 UTC m=+1036.287120924" watchObservedRunningTime="2026-01-31 05:00:27.338886798 +0000 
UTC m=+1036.287708483" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.365213 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-72zzj" podStartSLOduration=1.365182836 podStartE2EDuration="1.365182836s" podCreationTimestamp="2026-01-31 05:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:27.35888334 +0000 UTC m=+1036.307705045" watchObservedRunningTime="2026-01-31 05:00:27.365182836 +0000 UTC m=+1036.314004521" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.766360 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-642kb"] Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.767868 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.781451 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.782728 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-642kb"] Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.853619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts\") pod \"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.853736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqfc\" (UniqueName: \"kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc\") pod 
\"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.873884 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a0f6df-55ea-40c0-94eb-3add56c481dc" path="/var/lib/kubelet/pods/b6a0f6df-55ea-40c0-94eb-3add56c481dc/volumes" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.955882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:27 crc kubenswrapper[4832]: E0131 05:00:27.956201 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 05:00:27 crc kubenswrapper[4832]: E0131 05:00:27.956359 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 05:00:27 crc kubenswrapper[4832]: E0131 05:00:27.956428 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift podName:e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087 nodeName:}" failed. No retries permitted until 2026-01-31 05:00:31.956408522 +0000 UTC m=+1040.905230207 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift") pod "swift-storage-0" (UID: "e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087") : configmap "swift-ring-files" not found Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.956825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts\") pod \"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.958247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts\") pod \"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.958612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skqfc\" (UniqueName: \"kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc\") pod \"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:27 crc kubenswrapper[4832]: I0131 05:00:27.983945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skqfc\" (UniqueName: \"kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc\") pod \"root-account-create-update-642kb\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " pod="openstack/root-account-create-update-642kb" Jan 31 05:00:28 crc kubenswrapper[4832]: I0131 05:00:28.103447 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-642kb" Jan 31 05:00:28 crc kubenswrapper[4832]: I0131 05:00:28.338494 4832 generic.go:334] "Generic (PLEG): container finished" podID="c7b9dc67-5aba-4ee4-9771-78e9ad27206c" containerID="98db13dbe1176e447b3199cd7e1f4dcb0f0be4fd50b1f933d8e0bd98860dc12b" exitCode=0 Jan 31 05:00:28 crc kubenswrapper[4832]: I0131 05:00:28.338615 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-40db-account-create-update-26dfv" event={"ID":"c7b9dc67-5aba-4ee4-9771-78e9ad27206c","Type":"ContainerDied","Data":"98db13dbe1176e447b3199cd7e1f4dcb0f0be4fd50b1f933d8e0bd98860dc12b"} Jan 31 05:00:28 crc kubenswrapper[4832]: I0131 05:00:28.344373 4832 generic.go:334] "Generic (PLEG): container finished" podID="cf835b76-ea7d-499a-b532-229186e9e54f" containerID="7ab17ce58d44c113ee06e7986521b56c6c12f4da917ece9486b834430f041ea4" exitCode=0 Jan 31 05:00:28 crc kubenswrapper[4832]: I0131 05:00:28.344411 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-72zzj" event={"ID":"cf835b76-ea7d-499a-b532-229186e9e54f","Type":"ContainerDied","Data":"7ab17ce58d44c113ee06e7986521b56c6c12f4da917ece9486b834430f041ea4"} Jan 31 05:00:29 crc kubenswrapper[4832]: I0131 05:00:29.974770 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-72zzj" Jan 31 05:00:29 crc kubenswrapper[4832]: I0131 05:00:29.982070 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.129086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcjjs\" (UniqueName: \"kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs\") pod \"cf835b76-ea7d-499a-b532-229186e9e54f\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.129150 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbrsc\" (UniqueName: \"kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc\") pod \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.129249 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts\") pod \"cf835b76-ea7d-499a-b532-229186e9e54f\" (UID: \"cf835b76-ea7d-499a-b532-229186e9e54f\") " Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.129309 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts\") pod \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\" (UID: \"c7b9dc67-5aba-4ee4-9771-78e9ad27206c\") " Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.130397 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf835b76-ea7d-499a-b532-229186e9e54f" (UID: "cf835b76-ea7d-499a-b532-229186e9e54f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.131891 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7b9dc67-5aba-4ee4-9771-78e9ad27206c" (UID: "c7b9dc67-5aba-4ee4-9771-78e9ad27206c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.133405 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs" (OuterVolumeSpecName: "kube-api-access-zcjjs") pod "cf835b76-ea7d-499a-b532-229186e9e54f" (UID: "cf835b76-ea7d-499a-b532-229186e9e54f"). InnerVolumeSpecName "kube-api-access-zcjjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.134399 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc" (OuterVolumeSpecName: "kube-api-access-lbrsc") pod "c7b9dc67-5aba-4ee4-9771-78e9ad27206c" (UID: "c7b9dc67-5aba-4ee4-9771-78e9ad27206c"). InnerVolumeSpecName "kube-api-access-lbrsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.233231 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.233268 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcjjs\" (UniqueName: \"kubernetes.io/projected/cf835b76-ea7d-499a-b532-229186e9e54f-kube-api-access-zcjjs\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.233283 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbrsc\" (UniqueName: \"kubernetes.io/projected/c7b9dc67-5aba-4ee4-9771-78e9ad27206c-kube-api-access-lbrsc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.233294 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf835b76-ea7d-499a-b532-229186e9e54f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.476020 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-72zzj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.476865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-72zzj" event={"ID":"cf835b76-ea7d-499a-b532-229186e9e54f","Type":"ContainerDied","Data":"3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727"} Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.476904 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f97e00527fa1bc1d9b14e7afb30ec3bbf2bf62aa3b77da467bad6570e3a0727" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.485063 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-m4zgg"] Jan 31 05:00:30 crc kubenswrapper[4832]: E0131 05:00:30.485490 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf835b76-ea7d-499a-b532-229186e9e54f" containerName="mariadb-database-create" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.485510 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf835b76-ea7d-499a-b532-229186e9e54f" containerName="mariadb-database-create" Jan 31 05:00:30 crc kubenswrapper[4832]: E0131 05:00:30.485521 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b9dc67-5aba-4ee4-9771-78e9ad27206c" containerName="mariadb-account-create-update" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.485532 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b9dc67-5aba-4ee4-9771-78e9ad27206c" containerName="mariadb-account-create-update" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.485700 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf835b76-ea7d-499a-b532-229186e9e54f" containerName="mariadb-database-create" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.485721 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b9dc67-5aba-4ee4-9771-78e9ad27206c" 
containerName="mariadb-account-create-update" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.486647 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.492204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-40db-account-create-update-26dfv" event={"ID":"c7b9dc67-5aba-4ee4-9771-78e9ad27206c","Type":"ContainerDied","Data":"eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8"} Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.492255 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccc874062bc90b559dbe01e489e2102597aceae5312a9d61c5e4f5c98e40ce8" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.492319 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-40db-account-create-update-26dfv" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.509166 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-m4zgg"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.510276 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bwd8q" event={"ID":"2d790d64-4815-452e-9f17-13b1b9b75c35","Type":"ContainerStarted","Data":"380d6e93d2c7c97991597565b9dfed5b2946cbd1bc5da9a5c16a999c36c033e0"} Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.529824 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-642kb"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.543577 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bwd8q" podStartSLOduration=2.280737764 podStartE2EDuration="6.543531975s" podCreationTimestamp="2026-01-31 05:00:24 +0000 UTC" firstStartedPulling="2026-01-31 05:00:25.69605005 +0000 UTC m=+1034.644871735" 
lastFinishedPulling="2026-01-31 05:00:29.958844261 +0000 UTC m=+1038.907665946" observedRunningTime="2026-01-31 05:00:30.541880673 +0000 UTC m=+1039.490702358" watchObservedRunningTime="2026-01-31 05:00:30.543531975 +0000 UTC m=+1039.492353660" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.578522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67f6-account-create-update-b6kd5"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.585926 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.590703 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67f6-account-create-update-b6kd5"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.591354 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.644718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.644871 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bb9\" (UniqueName: \"kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.724094 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4shhs"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.728800 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.739162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4shhs"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.746933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bb9\" (UniqueName: \"kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.747099 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb678\" (UniqueName: \"kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.747168 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.747205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 
05:00:30.748007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.779923 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bb9\" (UniqueName: \"kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9\") pod \"keystone-db-create-m4zgg\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.780004 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-167e-account-create-update-2qthj"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.781094 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.790506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-167e-account-create-update-2qthj"] Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.807996 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.825203 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.849419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb678\" (UniqueName: \"kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.849808 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts\") pod \"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.850256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.850456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7cxc\" (UniqueName: \"kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.850978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.851013 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjwb\" (UniqueName: \"kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb\") pod \"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.851204 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.874354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb678\" (UniqueName: \"kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678\") pod \"keystone-67f6-account-create-update-b6kd5\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.952521 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.953649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7cxc\" (UniqueName: \"kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.953760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.953789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjwb\" (UniqueName: \"kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb\") pod \"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.953895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts\") pod \"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.954854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts\") pod 
\"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.954869 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.976450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7cxc\" (UniqueName: \"kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc\") pod \"placement-db-create-4shhs\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " pod="openstack/placement-db-create-4shhs" Jan 31 05:00:30 crc kubenswrapper[4832]: I0131 05:00:30.976460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjwb\" (UniqueName: \"kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb\") pod \"placement-167e-account-create-update-2qthj\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.237600 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4shhs" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.254135 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.331338 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-m4zgg"] Jan 31 05:00:31 crc kubenswrapper[4832]: W0131 05:00:31.344204 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ffd399_2a10_4bc8_a99b_7b1f472193bc.slice/crio-820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac WatchSource:0}: Error finding container 820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac: Status 404 returned error can't find the container with id 820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.438897 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-92w4s"] Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.441797 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.456007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-92w4s"] Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.456180 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.456222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pnlwb" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.503417 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67f6-account-create-update-b6kd5"] Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.522170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-m4zgg" event={"ID":"72ffd399-2a10-4bc8-a99b-7b1f472193bc","Type":"ContainerStarted","Data":"820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac"} Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.526918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67f6-account-create-update-b6kd5" event={"ID":"11927687-2cdb-407c-900c-6ed0a23438a9","Type":"ContainerStarted","Data":"e7fe174118c7ddc6dc6ad0351e06dd0318b3c0b596dccacb2b73e76122dc2033"} Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.538900 4832 generic.go:334] "Generic (PLEG): container finished" podID="cc75f3e3-81cc-4a54-a2e8-de839c8fac24" containerID="3f0e7b1a6323b3f1e897cd580643e2cb45b826cd414811d177c9d2e8a4776c3e" exitCode=0 Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.539006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-642kb" event={"ID":"cc75f3e3-81cc-4a54-a2e8-de839c8fac24","Type":"ContainerDied","Data":"3f0e7b1a6323b3f1e897cd580643e2cb45b826cd414811d177c9d2e8a4776c3e"} Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 
05:00:31.539061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-642kb" event={"ID":"cc75f3e3-81cc-4a54-a2e8-de839c8fac24","Type":"ContainerStarted","Data":"d93ae441dbb10cae93673e8c3969b9db2b32fd991a6704544bf219a099c47803"} Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.570183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.570336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.570376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.570452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dgp\" (UniqueName: \"kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.686084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.686675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.687001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dgp\" (UniqueName: \"kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.687135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.698342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.699970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle\") pod 
\"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.704451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.712290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dgp\" (UniqueName: \"kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp\") pod \"glance-db-sync-92w4s\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.788105 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4shhs"] Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.788909 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-92w4s" Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.826909 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-167e-account-create-update-2qthj"] Jan 31 05:00:31 crc kubenswrapper[4832]: W0131 05:00:31.827241 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfaf1fcd_6b83_4fc4_a3eb_54a2c513d889.slice/crio-7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65 WatchSource:0}: Error finding container 7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65: Status 404 returned error can't find the container with id 7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65 Jan 31 05:00:31 crc kubenswrapper[4832]: I0131 05:00:31.993220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:31 crc kubenswrapper[4832]: E0131 05:00:31.994967 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 05:00:31 crc kubenswrapper[4832]: E0131 05:00:31.995283 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 05:00:31 crc kubenswrapper[4832]: E0131 05:00:31.995477 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift podName:e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087 nodeName:}" failed. No retries permitted until 2026-01-31 05:00:39.995452465 +0000 UTC m=+1048.944274150 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift") pod "swift-storage-0" (UID: "e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087") : configmap "swift-ring-files" not found Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.417665 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-92w4s"] Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.548752 4832 generic.go:334] "Generic (PLEG): container finished" podID="11927687-2cdb-407c-900c-6ed0a23438a9" containerID="d019be7a2969f0359ad46a779bdd191b7e10d045e420e0eeacb7283ed983a3ab" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.548853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67f6-account-create-update-b6kd5" event={"ID":"11927687-2cdb-407c-900c-6ed0a23438a9","Type":"ContainerDied","Data":"d019be7a2969f0359ad46a779bdd191b7e10d045e420e0eeacb7283ed983a3ab"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.551492 4832 generic.go:334] "Generic (PLEG): container finished" podID="cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" containerID="60ce1c55ec6f2cd618e37fdfeb138e0ad43f70aa49603ee8cc8109e67016ddb6" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.551610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-167e-account-create-update-2qthj" event={"ID":"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889","Type":"ContainerDied","Data":"60ce1c55ec6f2cd618e37fdfeb138e0ad43f70aa49603ee8cc8109e67016ddb6"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.551665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-167e-account-create-update-2qthj" event={"ID":"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889","Type":"ContainerStarted","Data":"7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.553727 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerID="4f6ee4af033e0bef4d17d30f09f5bca28c73e1fe4562858c0fe0a7a7c45afa50" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.553791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerDied","Data":"4f6ee4af033e0bef4d17d30f09f5bca28c73e1fe4562858c0fe0a7a7c45afa50"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.557781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-92w4s" event={"ID":"ffef7600-94e5-444a-be7e-215a512c0233","Type":"ContainerStarted","Data":"0111e56ea5be097eaa9f8223db2902db11d6c055c8f33e90a5d16d35ce00e09a"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.559918 4832 generic.go:334] "Generic (PLEG): container finished" podID="30d3f7cb-9922-4e2e-91e3-7a14015cadc2" containerID="14366572e215d4f19df3cc6d7a77394e35ec73188a127cc4fc777f7959c0bf06" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.560214 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4shhs" event={"ID":"30d3f7cb-9922-4e2e-91e3-7a14015cadc2","Type":"ContainerDied","Data":"14366572e215d4f19df3cc6d7a77394e35ec73188a127cc4fc777f7959c0bf06"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.560254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4shhs" event={"ID":"30d3f7cb-9922-4e2e-91e3-7a14015cadc2","Type":"ContainerStarted","Data":"ce44420cad68ea1816efd8a5b68bb8334527dbcec4160356d9bf74c883cbf7a4"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.564466 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerID="45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.564552 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerDied","Data":"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2"} Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.569910 4832 generic.go:334] "Generic (PLEG): container finished" podID="72ffd399-2a10-4bc8-a99b-7b1f472193bc" containerID="9312524546f2f975c832ce2d431e9575dd4ec53bc3c7a0d330edaef8c7f8fa8e" exitCode=0 Jan 31 05:00:32 crc kubenswrapper[4832]: I0131 05:00:32.569976 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-m4zgg" event={"ID":"72ffd399-2a10-4bc8-a99b-7b1f472193bc","Type":"ContainerDied","Data":"9312524546f2f975c832ce2d431e9575dd4ec53bc3c7a0d330edaef8c7f8fa8e"} Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.012259 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-642kb" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.137538 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skqfc\" (UniqueName: \"kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc\") pod \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.138607 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts\") pod \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\" (UID: \"cc75f3e3-81cc-4a54-a2e8-de839c8fac24\") " Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.139958 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc75f3e3-81cc-4a54-a2e8-de839c8fac24" 
(UID: "cc75f3e3-81cc-4a54-a2e8-de839c8fac24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.150385 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc" (OuterVolumeSpecName: "kube-api-access-skqfc") pod "cc75f3e3-81cc-4a54-a2e8-de839c8fac24" (UID: "cc75f3e3-81cc-4a54-a2e8-de839c8fac24"). InnerVolumeSpecName "kube-api-access-skqfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.218739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.240889 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.240919 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skqfc\" (UniqueName: \"kubernetes.io/projected/cc75f3e3-81cc-4a54-a2e8-de839c8fac24-kube-api-access-skqfc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.333346 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.333625 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-wvlhl" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="dnsmasq-dns" containerID="cri-o://f1859b3066d40cfbfa77dd9f4a222fee767636768481d00adcfb191bdf7d7a69" gracePeriod=10 Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.584991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerStarted","Data":"17182016bd86d61e0916a20265f1e216a103b5db4fd0669e3e35dc5bb43ad9f3"} Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.585303 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.590587 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-642kb" event={"ID":"cc75f3e3-81cc-4a54-a2e8-de839c8fac24","Type":"ContainerDied","Data":"d93ae441dbb10cae93673e8c3969b9db2b32fd991a6704544bf219a099c47803"} Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.590730 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93ae441dbb10cae93673e8c3969b9db2b32fd991a6704544bf219a099c47803" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.590863 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-642kb" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.602225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerStarted","Data":"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f"} Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.603805 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.610773 4832 generic.go:334] "Generic (PLEG): container finished" podID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerID="f1859b3066d40cfbfa77dd9f4a222fee767636768481d00adcfb191bdf7d7a69" exitCode=0 Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.611265 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wvlhl" event={"ID":"f93d869f-9e5c-4cb6-a66f-2930752e74dc","Type":"ContainerDied","Data":"f1859b3066d40cfbfa77dd9f4a222fee767636768481d00adcfb191bdf7d7a69"} Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.625466 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.783790042 podStartE2EDuration="57.625446744s" podCreationTimestamp="2026-01-31 04:59:36 +0000 UTC" firstStartedPulling="2026-01-31 04:59:38.469258505 +0000 UTC m=+987.418080190" lastFinishedPulling="2026-01-31 04:59:56.310915207 +0000 UTC m=+1005.259736892" observedRunningTime="2026-01-31 05:00:33.624246967 +0000 UTC m=+1042.573068652" watchObservedRunningTime="2026-01-31 05:00:33.625446744 +0000 UTC m=+1042.574268429" Jan 31 05:00:33 crc kubenswrapper[4832]: I0131 05:00:33.701107 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.814491878 podStartE2EDuration="57.701078586s" 
podCreationTimestamp="2026-01-31 04:59:36 +0000 UTC" firstStartedPulling="2026-01-31 04:59:38.392026864 +0000 UTC m=+987.340848549" lastFinishedPulling="2026-01-31 04:59:56.278613572 +0000 UTC m=+1005.227435257" observedRunningTime="2026-01-31 05:00:33.683285772 +0000 UTC m=+1042.632107457" watchObservedRunningTime="2026-01-31 05:00:33.701078586 +0000 UTC m=+1042.649900271" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.071230 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.160666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts\") pod \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.160817 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bb9\" (UniqueName: \"kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9\") pod \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\" (UID: \"72ffd399-2a10-4bc8-a99b-7b1f472193bc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.163138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72ffd399-2a10-4bc8-a99b-7b1f472193bc" (UID: "72ffd399-2a10-4bc8-a99b-7b1f472193bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.170712 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9" (OuterVolumeSpecName: "kube-api-access-c9bb9") pod "72ffd399-2a10-4bc8-a99b-7b1f472193bc" (UID: "72ffd399-2a10-4bc8-a99b-7b1f472193bc"). InnerVolumeSpecName "kube-api-access-c9bb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.265147 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ffd399-2a10-4bc8-a99b-7b1f472193bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.265184 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bb9\" (UniqueName: \"kubernetes.io/projected/72ffd399-2a10-4bc8-a99b-7b1f472193bc-kube-api-access-c9bb9\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.341174 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.346097 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4shhs" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.356988 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.360086 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473010 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb\") pod \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zcfr\" (UniqueName: \"kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr\") pod \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb678\" (UniqueName: \"kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678\") pod \"11927687-2cdb-407c-900c-6ed0a23438a9\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473658 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc\") pod \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473757 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts\") pod \"11927687-2cdb-407c-900c-6ed0a23438a9\" (UID: \"11927687-2cdb-407c-900c-6ed0a23438a9\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.473853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config\") pod \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\" (UID: \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.474029 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vjwb\" (UniqueName: \"kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb\") pod \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.474229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts\") pod \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\" (UID: \"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.474331 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7cxc\" (UniqueName: \"kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc\") pod \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.474451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts\") pod \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\" (UID: \"30d3f7cb-9922-4e2e-91e3-7a14015cadc2\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.474560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb\") pod \"f93d869f-9e5c-4cb6-a66f-2930752e74dc\" (UID: 
\"f93d869f-9e5c-4cb6-a66f-2930752e74dc\") " Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.477394 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30d3f7cb-9922-4e2e-91e3-7a14015cadc2" (UID: "30d3f7cb-9922-4e2e-91e3-7a14015cadc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.477615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11927687-2cdb-407c-900c-6ed0a23438a9" (UID: "11927687-2cdb-407c-900c-6ed0a23438a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.478375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" (UID: "cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.486547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb" (OuterVolumeSpecName: "kube-api-access-6vjwb") pod "cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" (UID: "cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889"). InnerVolumeSpecName "kube-api-access-6vjwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.487264 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc" (OuterVolumeSpecName: "kube-api-access-z7cxc") pod "30d3f7cb-9922-4e2e-91e3-7a14015cadc2" (UID: "30d3f7cb-9922-4e2e-91e3-7a14015cadc2"). InnerVolumeSpecName "kube-api-access-z7cxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.487416 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678" (OuterVolumeSpecName: "kube-api-access-mb678") pod "11927687-2cdb-407c-900c-6ed0a23438a9" (UID: "11927687-2cdb-407c-900c-6ed0a23438a9"). InnerVolumeSpecName "kube-api-access-mb678". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.494895 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr" (OuterVolumeSpecName: "kube-api-access-8zcfr") pod "f93d869f-9e5c-4cb6-a66f-2930752e74dc" (UID: "f93d869f-9e5c-4cb6-a66f-2930752e74dc"). InnerVolumeSpecName "kube-api-access-8zcfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.534337 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f93d869f-9e5c-4cb6-a66f-2930752e74dc" (UID: "f93d869f-9e5c-4cb6-a66f-2930752e74dc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.546050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f93d869f-9e5c-4cb6-a66f-2930752e74dc" (UID: "f93d869f-9e5c-4cb6-a66f-2930752e74dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.549271 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f93d869f-9e5c-4cb6-a66f-2930752e74dc" (UID: "f93d869f-9e5c-4cb6-a66f-2930752e74dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.560744 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config" (OuterVolumeSpecName: "config") pod "f93d869f-9e5c-4cb6-a66f-2930752e74dc" (UID: "f93d869f-9e5c-4cb6-a66f-2930752e74dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576390 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb678\" (UniqueName: \"kubernetes.io/projected/11927687-2cdb-407c-900c-6ed0a23438a9-kube-api-access-mb678\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576434 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576445 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11927687-2cdb-407c-900c-6ed0a23438a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576455 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576463 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vjwb\" (UniqueName: \"kubernetes.io/projected/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-kube-api-access-6vjwb\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576471 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576480 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7cxc\" (UniqueName: \"kubernetes.io/projected/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-kube-api-access-z7cxc\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 
05:00:34.576489 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30d3f7cb-9922-4e2e-91e3-7a14015cadc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576498 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576506 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93d869f-9e5c-4cb6-a66f-2930752e74dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.576514 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zcfr\" (UniqueName: \"kubernetes.io/projected/f93d869f-9e5c-4cb6-a66f-2930752e74dc-kube-api-access-8zcfr\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.624091 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-wvlhl" event={"ID":"f93d869f-9e5c-4cb6-a66f-2930752e74dc","Type":"ContainerDied","Data":"26de74dd8571140c11f0d0bd70367d1d84ba242df682fd914b576ea34f0d4267"} Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.624153 4832 scope.go:117] "RemoveContainer" containerID="f1859b3066d40cfbfa77dd9f4a222fee767636768481d00adcfb191bdf7d7a69" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.624304 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-wvlhl" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.631408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-m4zgg" event={"ID":"72ffd399-2a10-4bc8-a99b-7b1f472193bc","Type":"ContainerDied","Data":"820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac"} Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.631473 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820b8a361e054c233b47fcc65886b42567680e6c857c66e427219480fad3cfac" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.631426 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-m4zgg" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.633333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67f6-account-create-update-b6kd5" event={"ID":"11927687-2cdb-407c-900c-6ed0a23438a9","Type":"ContainerDied","Data":"e7fe174118c7ddc6dc6ad0351e06dd0318b3c0b596dccacb2b73e76122dc2033"} Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.633383 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7fe174118c7ddc6dc6ad0351e06dd0318b3c0b596dccacb2b73e76122dc2033" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.633396 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67f6-account-create-update-b6kd5" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.639261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-167e-account-create-update-2qthj" event={"ID":"cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889","Type":"ContainerDied","Data":"7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65"} Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.639287 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e4c77f82f34c88271ff167fd40fac5e34b6b1d14f000c4ddb53c304b78cbf65" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.639380 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-167e-account-create-update-2qthj" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.646475 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4shhs" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.647462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4shhs" event={"ID":"30d3f7cb-9922-4e2e-91e3-7a14015cadc2","Type":"ContainerDied","Data":"ce44420cad68ea1816efd8a5b68bb8334527dbcec4160356d9bf74c883cbf7a4"} Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.647515 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce44420cad68ea1816efd8a5b68bb8334527dbcec4160356d9bf74c883cbf7a4" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.655132 4832 scope.go:117] "RemoveContainer" containerID="80dabdef5a2ad3f7d26e56c475a16a2ec4cda49de773ec43b0a12c3487521a0c" Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.671666 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 05:00:34 crc kubenswrapper[4832]: I0131 05:00:34.678592 4832 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-8554648995-wvlhl"] Jan 31 05:00:35 crc kubenswrapper[4832]: I0131 05:00:35.067323 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 05:00:35 crc kubenswrapper[4832]: I0131 05:00:35.880277 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" path="/var/lib/kubelet/pods/f93d869f-9e5c-4cb6-a66f-2930752e74dc/volumes" Jan 31 05:00:37 crc kubenswrapper[4832]: E0131 05:00:37.461011 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:00:39 crc kubenswrapper[4832]: I0131 05:00:39.036332 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-642kb"] Jan 31 05:00:39 crc kubenswrapper[4832]: I0131 05:00:39.064309 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-642kb"] Jan 31 05:00:39 crc kubenswrapper[4832]: I0131 05:00:39.713589 4832 generic.go:334] "Generic (PLEG): container finished" podID="2d790d64-4815-452e-9f17-13b1b9b75c35" containerID="380d6e93d2c7c97991597565b9dfed5b2946cbd1bc5da9a5c16a999c36c033e0" exitCode=0 Jan 31 05:00:39 crc kubenswrapper[4832]: I0131 05:00:39.713705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bwd8q" event={"ID":"2d790d64-4815-452e-9f17-13b1b9b75c35","Type":"ContainerDied","Data":"380d6e93d2c7c97991597565b9dfed5b2946cbd1bc5da9a5c16a999c36c033e0"} Jan 31 05:00:39 crc kubenswrapper[4832]: I0131 
05:00:39.875621 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc75f3e3-81cc-4a54-a2e8-de839c8fac24" path="/var/lib/kubelet/pods/cc75f3e3-81cc-4a54-a2e8-de839c8fac24/volumes" Jan 31 05:00:40 crc kubenswrapper[4832]: I0131 05:00:40.088882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:40 crc kubenswrapper[4832]: I0131 05:00:40.103478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087-etc-swift\") pod \"swift-storage-0\" (UID: \"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087\") " pod="openstack/swift-storage-0" Jan 31 05:00:40 crc kubenswrapper[4832]: I0131 05:00:40.394902 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.655409 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8sq59" podUID="103522f1-37d5-48e1-8004-ab58b154d040" containerName="ovn-controller" probeResult="failure" output=< Jan 31 05:00:41 crc kubenswrapper[4832]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 05:00:41 crc kubenswrapper[4832]: > Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.658462 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.659692 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nmcpt" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891057 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8sq59-config-zltv2"] Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891437 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891452 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891472 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="init" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891479 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="init" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891488 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc75f3e3-81cc-4a54-a2e8-de839c8fac24" 
containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891495 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc75f3e3-81cc-4a54-a2e8-de839c8fac24" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891526 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d3f7cb-9922-4e2e-91e3-7a14015cadc2" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891535 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d3f7cb-9922-4e2e-91e3-7a14015cadc2" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891546 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="dnsmasq-dns" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891557 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="dnsmasq-dns" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891585 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ffd399-2a10-4bc8-a99b-7b1f472193bc" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891592 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ffd399-2a10-4bc8-a99b-7b1f472193bc" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: E0131 05:00:41.891606 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11927687-2cdb-407c-900c-6ed0a23438a9" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891613 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="11927687-2cdb-407c-900c-6ed0a23438a9" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891823 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cc75f3e3-81cc-4a54-a2e8-de839c8fac24" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891839 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891853 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="11927687-2cdb-407c-900c-6ed0a23438a9" containerName="mariadb-account-create-update" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891864 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ffd399-2a10-4bc8-a99b-7b1f472193bc" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891878 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93d869f-9e5c-4cb6-a66f-2930752e74dc" containerName="dnsmasq-dns" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.891887 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d3f7cb-9922-4e2e-91e3-7a14015cadc2" containerName="mariadb-database-create" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.892545 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.896935 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 05:00:41 crc kubenswrapper[4832]: I0131 05:00:41.906995 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59-config-zltv2"] Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.028747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.029084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.029156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cmg\" (UniqueName: \"kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.029204 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: 
\"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.029473 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.029569 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131578 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131621 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn\") pod 
\"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78cmg\" (UniqueName: \"kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.131779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.132070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.132182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: 
\"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.132380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.133510 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.135895 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.161762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78cmg\" (UniqueName: \"kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg\") pod \"ovn-controller-8sq59-config-zltv2\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:42 crc kubenswrapper[4832]: I0131 05:00:42.222955 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.056016 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qbknh"] Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.059657 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.062754 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.070863 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qbknh"] Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.166645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.166706 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qm2\" (UniqueName: \"kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.281630 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " 
pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.281748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qm2\" (UniqueName: \"kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.282955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.346083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qm2\" (UniqueName: \"kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2\") pod \"root-account-create-update-qbknh\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:44 crc kubenswrapper[4832]: I0131 05:00:44.439909 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:46 crc kubenswrapper[4832]: I0131 05:00:46.615113 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8sq59" podUID="103522f1-37d5-48e1-8004-ab58b154d040" containerName="ovn-controller" probeResult="failure" output=< Jan 31 05:00:46 crc kubenswrapper[4832]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 05:00:46 crc kubenswrapper[4832]: > Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.365027 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443335 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443432 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvv5l\" (UniqueName: \"kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443727 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.443822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices\") pod \"2d790d64-4815-452e-9f17-13b1b9b75c35\" (UID: \"2d790d64-4815-452e-9f17-13b1b9b75c35\") " Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.445175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.449366 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.459091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.478883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l" (OuterVolumeSpecName: "kube-api-access-pvv5l") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "kube-api-access-pvv5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.481113 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.495447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts" (OuterVolumeSpecName: "scripts") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.506140 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2d790d64-4815-452e-9f17-13b1b9b75c35" (UID: "2d790d64-4815-452e-9f17-13b1b9b75c35"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545218 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545253 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545268 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2d790d64-4815-452e-9f17-13b1b9b75c35-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545281 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 
05:00:47.545292 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2d790d64-4815-452e-9f17-13b1b9b75c35-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545305 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvv5l\" (UniqueName: \"kubernetes.io/projected/2d790d64-4815-452e-9f17-13b1b9b75c35-kube-api-access-pvv5l\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.545316 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2d790d64-4815-452e-9f17-13b1b9b75c35-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:47 crc kubenswrapper[4832]: E0131 05:00:47.670787 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache]" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.746833 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.814535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bwd8q" event={"ID":"2d790d64-4815-452e-9f17-13b1b9b75c35","Type":"ContainerDied","Data":"aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28"} Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.814659 4832 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="aed8111e2e2978624710840de73dfcfd58cc245548beeb2f81ff4374e8ed5f28" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.814625 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bwd8q" Jan 31 05:00:47 crc kubenswrapper[4832]: I0131 05:00:47.906306 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59-config-zltv2"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.002085 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qbknh"] Jan 31 05:00:48 crc kubenswrapper[4832]: W0131 05:00:48.012169 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9b042eb_c890_4f83_96c3_a1f2fbd6d712.slice/crio-016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732 WatchSource:0}: Error finding container 016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732: Status 404 returned error can't find the container with id 016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732 Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.051071 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-m2t2g"] Jan 31 05:00:48 crc kubenswrapper[4832]: E0131 05:00:48.051432 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d790d64-4815-452e-9f17-13b1b9b75c35" containerName="swift-ring-rebalance" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.051444 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d790d64-4815-452e-9f17-13b1b9b75c35" containerName="swift-ring-rebalance" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.051629 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d790d64-4815-452e-9f17-13b1b9b75c35" containerName="swift-ring-rebalance" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.052246 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.062741 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m2t2g"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.109589 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.165376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.165489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghhf\" (UniqueName: \"kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.169353 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-25g4q"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.170413 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.215040 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-25g4q"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.269101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghhf\" (UniqueName: \"kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.269229 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.272989 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.280210 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.316237 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghhf\" (UniqueName: \"kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf\") pod \"cinder-db-create-m2t2g\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.378186 4832 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.386274 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1920-account-create-update-kjz5k"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.389344 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.402630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.402761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vml\" (UniqueName: \"kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.406933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.464129 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1920-account-create-update-kjz5k"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.504625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vml\" (UniqueName: \"kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " 
pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.504688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4kn\" (UniqueName: \"kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.504784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.504811 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.506627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.506986 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a5be-account-create-update-r78dv"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.508036 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.528242 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.552858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vml\" (UniqueName: \"kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml\") pod \"barbican-db-create-25g4q\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.558178 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a5be-account-create-update-r78dv"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.607988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7qrz\" (UniqueName: \"kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.608319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.608421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4kn\" (UniqueName: \"kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: 
\"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.608512 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.609473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.656365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4kn\" (UniqueName: \"kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn\") pod \"barbican-1920-account-create-update-kjz5k\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.677618 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t6rff"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.686052 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.705202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t6rff"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.717732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7qrz\" (UniqueName: \"kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.717833 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbhsg\" (UniqueName: \"kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.717883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.717912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.719035 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.760273 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7qrz\" (UniqueName: \"kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz\") pod \"cinder-a5be-account-create-update-r78dv\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.798624 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wrfck"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.799790 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.805308 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.806242 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.806616 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.806906 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z4jm" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.814306 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.819874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.819930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.819968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.820043 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dg94\" (UniqueName: \"kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.820085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbhsg\" (UniqueName: \"kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.820603 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wrfck"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.824864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.861190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-92w4s" event={"ID":"ffef7600-94e5-444a-be7e-215a512c0233","Type":"ContainerStarted","Data":"bb0f0186b50c9eca7cc4e7874b8dd3d3df80f652db5e3cfed202595e39092956"} Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.869581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qbknh" event={"ID":"d9b042eb-c890-4f83-96c3-a1f2fbd6d712","Type":"ContainerStarted","Data":"016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732"} Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.870498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"f687d748b57704e747f60ab7bd0ac203b34fa803fc586dfa715683b08e7073f1"} Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.871462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-zltv2" event={"ID":"79a497fa-abcb-449a-981b-61339f3bd3eb","Type":"ContainerStarted","Data":"ce1f260c1bacf2a3bc2ab931116187b12b28857b36e62338ff9977cb9da41632"} Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.883678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbhsg\" (UniqueName: \"kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg\") pod \"neutron-db-create-t6rff\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.894152 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.900406 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2130-account-create-update-hsf5h"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.901598 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.909344 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-92w4s" podStartSLOduration=2.832217303 podStartE2EDuration="17.909320434s" podCreationTimestamp="2026-01-31 05:00:31 +0000 UTC" firstStartedPulling="2026-01-31 05:00:32.425978253 +0000 UTC m=+1041.374799938" lastFinishedPulling="2026-01-31 05:00:47.503081384 +0000 UTC m=+1056.451903069" observedRunningTime="2026-01-31 05:00:48.900851591 +0000 UTC m=+1057.849673286" watchObservedRunningTime="2026-01-31 05:00:48.909320434 +0000 UTC m=+1057.858142119" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.923006 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dg94\" (UniqueName: \"kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.923365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.923649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.923816 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.923912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l796t\" (UniqueName: \"kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.935196 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.937438 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.938511 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.951166 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2130-account-create-update-hsf5h"] Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.956724 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:48 crc kubenswrapper[4832]: I0131 05:00:48.985884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dg94\" (UniqueName: \"kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94\") pod \"keystone-db-sync-wrfck\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.027011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.027141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l796t\" (UniqueName: \"kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.027872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.048267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l796t\" (UniqueName: \"kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t\") pod \"neutron-2130-account-create-update-hsf5h\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.060827 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.141934 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wrfck" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.278448 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.403065 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m2t2g"] Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.537180 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-25g4q"] Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.732019 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a5be-account-create-update-r78dv"] Jan 31 05:00:49 crc kubenswrapper[4832]: W0131 05:00:49.739485 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6fea4b_c60e_4068_9814_50caf6f127aa.slice/crio-920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124 WatchSource:0}: Error finding container 920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124: Status 404 returned error can't find the container with id 920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124 Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.829308 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1920-account-create-update-kjz5k"] Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.901101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m2t2g" event={"ID":"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb","Type":"ContainerStarted","Data":"08d02cf98434fbaf3c5caf2bda22cee72b0f35a78df7ff1dc3e504ec0a3cce31"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.903458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qbknh" event={"ID":"d9b042eb-c890-4f83-96c3-a1f2fbd6d712","Type":"ContainerStarted","Data":"a5493cc636deef6707a12f3b7e5907e8e15978c00f318e2d7df3142bbe7d8a2b"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.907058 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1920-account-create-update-kjz5k" event={"ID":"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf","Type":"ContainerStarted","Data":"180a2114b466266b50deede026f0914d1b7de0ee00d5f280839c9fda56504e43"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.914490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-zltv2" event={"ID":"79a497fa-abcb-449a-981b-61339f3bd3eb","Type":"ContainerStarted","Data":"eec549683e1c37d759a251067f9759dad3d26d9c9615de12f755086f9566344f"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.921700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-25g4q" event={"ID":"17737fb8-509f-41f5-ba32-e7079f79839c","Type":"ContainerStarted","Data":"9e0e81086e233a19595d8584f86daea7843cf62b41abed7e015315a5664208f9"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.929291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a5be-account-create-update-r78dv" event={"ID":"2f6fea4b-c60e-4068-9814-50caf6f127aa","Type":"ContainerStarted","Data":"920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124"} Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.942013 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qbknh" podStartSLOduration=5.941985497 podStartE2EDuration="5.941985497s" podCreationTimestamp="2026-01-31 05:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:49.932921996 +0000 UTC m=+1058.881743681" watchObservedRunningTime="2026-01-31 05:00:49.941985497 +0000 UTC m=+1058.890807182" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.961579 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t6rff"] Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 
05:00:49.965829 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8sq59-config-zltv2" podStartSLOduration=8.965799408 podStartE2EDuration="8.965799408s" podCreationTimestamp="2026-01-31 05:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:49.961527715 +0000 UTC m=+1058.910349420" watchObservedRunningTime="2026-01-31 05:00:49.965799408 +0000 UTC m=+1058.914621093" Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.987444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2130-account-create-update-hsf5h"] Jan 31 05:00:49 crc kubenswrapper[4832]: I0131 05:00:49.994260 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wrfck"] Jan 31 05:00:49 crc kubenswrapper[4832]: W0131 05:00:49.998153 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddac9656f_3e3a_486c_a90e_94162a824223.slice/crio-2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5 WatchSource:0}: Error finding container 2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5: Status 404 returned error can't find the container with id 2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5 Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.944073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1920-account-create-update-kjz5k" event={"ID":"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf","Type":"ContainerStarted","Data":"0eb48ae8fc75cfa70b4a70909ba66ff101a56e28766e4650ea85ff0c0d0018f3"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.947600 4832 generic.go:334] "Generic (PLEG): container finished" podID="79a497fa-abcb-449a-981b-61339f3bd3eb" containerID="eec549683e1c37d759a251067f9759dad3d26d9c9615de12f755086f9566344f" exitCode=0 Jan 31 
05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.947684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-zltv2" event={"ID":"79a497fa-abcb-449a-981b-61339f3bd3eb","Type":"ContainerDied","Data":"eec549683e1c37d759a251067f9759dad3d26d9c9615de12f755086f9566344f"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.950048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfck" event={"ID":"4d6dbe3e-1852-493a-926a-95d85495da09","Type":"ContainerStarted","Data":"f1412f4986f3ddca552e2c93148d60103ee1f5678860c0664f2ee9e9e10cffb9"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.952500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-25g4q" event={"ID":"17737fb8-509f-41f5-ba32-e7079f79839c","Type":"ContainerStarted","Data":"0627e10c9e962a1dc681a4ca9b3c1bedc80be1ead41b0d7c81e4b67e50bd6728"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.957176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a5be-account-create-update-r78dv" event={"ID":"2f6fea4b-c60e-4068-9814-50caf6f127aa","Type":"ContainerStarted","Data":"494c402f53aab4347a291134569f5319b7c69ad36ad5a3ede27303b93b38cc1c"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.960947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m2t2g" event={"ID":"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb","Type":"ContainerStarted","Data":"520cceb62a925e47641f0ea1f6b1a784ebe340c38e4c4d88f6b0fbeeec04936d"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.962963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2130-account-create-update-hsf5h" event={"ID":"dac9656f-3e3a-486c-a90e-94162a824223","Type":"ContainerStarted","Data":"0c85861e44b44826382484c91f4f82ed13a1eb4bdd6030f2582e4d436b533947"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.962989 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-2130-account-create-update-hsf5h" event={"ID":"dac9656f-3e3a-486c-a90e-94162a824223","Type":"ContainerStarted","Data":"2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.965781 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t6rff" event={"ID":"e8216556-80f1-47a9-8a8c-d01c57f8dd71","Type":"ContainerStarted","Data":"090005c53702850a249d5b313f6fe6741e475ae68e85871f129d47bafdd137da"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.965806 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t6rff" event={"ID":"e8216556-80f1-47a9-8a8c-d01c57f8dd71","Type":"ContainerStarted","Data":"60689da87a23a03bbe2c951a8714b4fbd0f4f11db4e7533d061efdb04d43c61d"} Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.974809 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1920-account-create-update-kjz5k" podStartSLOduration=2.974780815 podStartE2EDuration="2.974780815s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:50.959752598 +0000 UTC m=+1059.908574283" watchObservedRunningTime="2026-01-31 05:00:50.974780815 +0000 UTC m=+1059.923602500" Jan 31 05:00:50 crc kubenswrapper[4832]: I0131 05:00:50.985861 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a5be-account-create-update-r78dv" podStartSLOduration=2.985838489 podStartE2EDuration="2.985838489s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:50.981035649 +0000 UTC m=+1059.929857334" watchObservedRunningTime="2026-01-31 05:00:50.985838489 +0000 UTC 
m=+1059.934660174" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.027929 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-25g4q" podStartSLOduration=3.027903877 podStartE2EDuration="3.027903877s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:51.023313444 +0000 UTC m=+1059.972135129" watchObservedRunningTime="2026-01-31 05:00:51.027903877 +0000 UTC m=+1059.976725582" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.048093 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2130-account-create-update-hsf5h" podStartSLOduration=3.048072274 podStartE2EDuration="3.048072274s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:51.042354206 +0000 UTC m=+1059.991175901" watchObservedRunningTime="2026-01-31 05:00:51.048072274 +0000 UTC m=+1059.996893959" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.095417 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-t6rff" podStartSLOduration=3.095395065 podStartE2EDuration="3.095395065s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:51.067775217 +0000 UTC m=+1060.016596902" watchObservedRunningTime="2026-01-31 05:00:51.095395065 +0000 UTC m=+1060.044216750" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.098042 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-m2t2g" podStartSLOduration=3.0980290679999998 podStartE2EDuration="3.098029068s" 
podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:51.085379024 +0000 UTC m=+1060.034200729" watchObservedRunningTime="2026-01-31 05:00:51.098029068 +0000 UTC m=+1060.046850753" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.615042 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8sq59" Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.991042 4832 generic.go:334] "Generic (PLEG): container finished" podID="e8216556-80f1-47a9-8a8c-d01c57f8dd71" containerID="090005c53702850a249d5b313f6fe6741e475ae68e85871f129d47bafdd137da" exitCode=0 Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.991137 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t6rff" event={"ID":"e8216556-80f1-47a9-8a8c-d01c57f8dd71","Type":"ContainerDied","Data":"090005c53702850a249d5b313f6fe6741e475ae68e85871f129d47bafdd137da"} Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.996222 4832 generic.go:334] "Generic (PLEG): container finished" podID="d9b042eb-c890-4f83-96c3-a1f2fbd6d712" containerID="a5493cc636deef6707a12f3b7e5907e8e15978c00f318e2d7df3142bbe7d8a2b" exitCode=0 Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.996267 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qbknh" event={"ID":"d9b042eb-c890-4f83-96c3-a1f2fbd6d712","Type":"ContainerDied","Data":"a5493cc636deef6707a12f3b7e5907e8e15978c00f318e2d7df3142bbe7d8a2b"} Jan 31 05:00:51 crc kubenswrapper[4832]: I0131 05:00:51.999354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"5557b47adb87e3fdf836ac9b8120ad9186db115b544cb6c343e4d9347f22d55f"} Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.002883 
4832 generic.go:334] "Generic (PLEG): container finished" podID="17737fb8-509f-41f5-ba32-e7079f79839c" containerID="0627e10c9e962a1dc681a4ca9b3c1bedc80be1ead41b0d7c81e4b67e50bd6728" exitCode=0 Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.002971 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-25g4q" event={"ID":"17737fb8-509f-41f5-ba32-e7079f79839c","Type":"ContainerDied","Data":"0627e10c9e962a1dc681a4ca9b3c1bedc80be1ead41b0d7c81e4b67e50bd6728"} Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.005424 4832 generic.go:334] "Generic (PLEG): container finished" podID="25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" containerID="520cceb62a925e47641f0ea1f6b1a784ebe340c38e4c4d88f6b0fbeeec04936d" exitCode=0 Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.005525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m2t2g" event={"ID":"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb","Type":"ContainerDied","Data":"520cceb62a925e47641f0ea1f6b1a784ebe340c38e4c4d88f6b0fbeeec04936d"} Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.572527 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.709511 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.709901 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.710802 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.710854 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78cmg\" (UniqueName: \"kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.710884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.710924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711115 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run\") pod \"79a497fa-abcb-449a-981b-61339f3bd3eb\" (UID: \"79a497fa-abcb-449a-981b-61339f3bd3eb\") " Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711363 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711454 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run" (OuterVolumeSpecName: "var-run") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711914 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711929 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.711940 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/79a497fa-abcb-449a-981b-61339f3bd3eb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.712551 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts" (OuterVolumeSpecName: "scripts") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.712731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.719757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg" (OuterVolumeSpecName: "kube-api-access-78cmg") pod "79a497fa-abcb-449a-981b-61339f3bd3eb" (UID: "79a497fa-abcb-449a-981b-61339f3bd3eb"). InnerVolumeSpecName "kube-api-access-78cmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.814838 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.814873 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78cmg\" (UniqueName: \"kubernetes.io/projected/79a497fa-abcb-449a-981b-61339f3bd3eb-kube-api-access-78cmg\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:52 crc kubenswrapper[4832]: I0131 05:00:52.814885 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79a497fa-abcb-449a-981b-61339f3bd3eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.017733 4832 generic.go:334] "Generic (PLEG): container finished" podID="dac9656f-3e3a-486c-a90e-94162a824223" 
containerID="0c85861e44b44826382484c91f4f82ed13a1eb4bdd6030f2582e4d436b533947" exitCode=0 Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.017809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2130-account-create-update-hsf5h" event={"ID":"dac9656f-3e3a-486c-a90e-94162a824223","Type":"ContainerDied","Data":"0c85861e44b44826382484c91f4f82ed13a1eb4bdd6030f2582e4d436b533947"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.038115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"2ac849e4eb01fb550d1c319443b996528be2719160409fa1b0be9c43bafdb333"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.038179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"3c8149993ce853d6dc603bb8793041988ecebe41893e92127b8dd6f7b3c32151"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.038198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"2f865cd96d43ce2a54e6d51621878a7cdb0d123f2eb53184a071b8fdbc023e56"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.041235 4832 generic.go:334] "Generic (PLEG): container finished" podID="595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" containerID="0eb48ae8fc75cfa70b4a70909ba66ff101a56e28766e4650ea85ff0c0d0018f3" exitCode=0 Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.041317 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1920-account-create-update-kjz5k" event={"ID":"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf","Type":"ContainerDied","Data":"0eb48ae8fc75cfa70b4a70909ba66ff101a56e28766e4650ea85ff0c0d0018f3"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.058936 4832 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8sq59-config-zltv2" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.058999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-zltv2" event={"ID":"79a497fa-abcb-449a-981b-61339f3bd3eb","Type":"ContainerDied","Data":"ce1f260c1bacf2a3bc2ab931116187b12b28857b36e62338ff9977cb9da41632"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.059052 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1f260c1bacf2a3bc2ab931116187b12b28857b36e62338ff9977cb9da41632" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.061651 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f6fea4b-c60e-4068-9814-50caf6f127aa" containerID="494c402f53aab4347a291134569f5319b7c69ad36ad5a3ede27303b93b38cc1c" exitCode=0 Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.061887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a5be-account-create-update-r78dv" event={"ID":"2f6fea4b-c60e-4068-9814-50caf6f127aa","Type":"ContainerDied","Data":"494c402f53aab4347a291134569f5319b7c69ad36ad5a3ede27303b93b38cc1c"} Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.118262 4832 status_manager.go:907] "Failed to delete status for pod" pod="openstack/ovn-controller-8sq59-config-zltv2" err="pods \"ovn-controller-8sq59-config-zltv2\" not found" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.138664 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8sq59-config-zltv2"] Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.167550 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8sq59-config-zltv2"] Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.325061 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8sq59-config-p49cd"] Jan 31 05:00:53 crc kubenswrapper[4832]: E0131 
05:00:53.325797 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a497fa-abcb-449a-981b-61339f3bd3eb" containerName="ovn-config" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.325818 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a497fa-abcb-449a-981b-61339f3bd3eb" containerName="ovn-config" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.326049 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a497fa-abcb-449a-981b-61339f3bd3eb" containerName="ovn-config" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.327270 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.330811 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.361518 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59-config-p49cd"] Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.377482 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429692 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429753 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429784 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wm4\" (UniqueName: 
\"kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.429810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.531040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts\") pod \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.531111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qm2\" (UniqueName: \"kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2\") pod \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\" (UID: \"d9b042eb-c890-4f83-96c3-a1f2fbd6d712\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.535502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9b042eb-c890-4f83-96c3-a1f2fbd6d712" (UID: "d9b042eb-c890-4f83-96c3-a1f2fbd6d712"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.536531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wm4\" (UniqueName: \"kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.551831 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2" (OuterVolumeSpecName: "kube-api-access-l9qm2") pod "d9b042eb-c890-4f83-96c3-a1f2fbd6d712" (UID: "d9b042eb-c890-4f83-96c3-a1f2fbd6d712"). InnerVolumeSpecName "kube-api-access-l9qm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.553965 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.554094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.555392 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" 
(UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.555482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.558325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.561114 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wm4\" (UniqueName: \"kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.562909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.563110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") 
" pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.563342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.563512 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.563526 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qm2\" (UniqueName: \"kubernetes.io/projected/d9b042eb-c890-4f83-96c3-a1f2fbd6d712-kube-api-access-l9qm2\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.563636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.564337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts\") pod \"ovn-controller-8sq59-config-p49cd\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.653234 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.701054 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.708291 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.758018 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.768779 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lghhf\" (UniqueName: \"kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf\") pod \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.768869 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts\") pod \"17737fb8-509f-41f5-ba32-e7079f79839c\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.768996 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4vml\" (UniqueName: \"kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml\") pod \"17737fb8-509f-41f5-ba32-e7079f79839c\" (UID: \"17737fb8-509f-41f5-ba32-e7079f79839c\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.769117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts\") pod 
\"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\" (UID: \"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.769531 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17737fb8-509f-41f5-ba32-e7079f79839c" (UID: "17737fb8-509f-41f5-ba32-e7079f79839c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.770528 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" (UID: "25b8bcc6-a968-4bfc-ba6c-7431d3f41deb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.775091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml" (OuterVolumeSpecName: "kube-api-access-r4vml") pod "17737fb8-509f-41f5-ba32-e7079f79839c" (UID: "17737fb8-509f-41f5-ba32-e7079f79839c"). InnerVolumeSpecName "kube-api-access-r4vml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.777533 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf" (OuterVolumeSpecName: "kube-api-access-lghhf") pod "25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" (UID: "25b8bcc6-a968-4bfc-ba6c-7431d3f41deb"). InnerVolumeSpecName "kube-api-access-lghhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.874455 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts\") pod \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.874521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbhsg\" (UniqueName: \"kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg\") pod \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\" (UID: \"e8216556-80f1-47a9-8a8c-d01c57f8dd71\") " Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.874990 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lghhf\" (UniqueName: \"kubernetes.io/projected/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-kube-api-access-lghhf\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.875010 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17737fb8-509f-41f5-ba32-e7079f79839c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.875021 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4vml\" (UniqueName: \"kubernetes.io/projected/17737fb8-509f-41f5-ba32-e7079f79839c-kube-api-access-r4vml\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.875032 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.875379 4832 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8216556-80f1-47a9-8a8c-d01c57f8dd71" (UID: "e8216556-80f1-47a9-8a8c-d01c57f8dd71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.878477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg" (OuterVolumeSpecName: "kube-api-access-jbhsg") pod "e8216556-80f1-47a9-8a8c-d01c57f8dd71" (UID: "e8216556-80f1-47a9-8a8c-d01c57f8dd71"). InnerVolumeSpecName "kube-api-access-jbhsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.884291 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a497fa-abcb-449a-981b-61339f3bd3eb" path="/var/lib/kubelet/pods/79a497fa-abcb-449a-981b-61339f3bd3eb/volumes" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.976790 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8216556-80f1-47a9-8a8c-d01c57f8dd71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:53 crc kubenswrapper[4832]: I0131 05:00:53.976824 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbhsg\" (UniqueName: \"kubernetes.io/projected/e8216556-80f1-47a9-8a8c-d01c57f8dd71-kube-api-access-jbhsg\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.075754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-25g4q" event={"ID":"17737fb8-509f-41f5-ba32-e7079f79839c","Type":"ContainerDied","Data":"9e0e81086e233a19595d8584f86daea7843cf62b41abed7e015315a5664208f9"} Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.075796 4832 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0e81086e233a19595d8584f86daea7843cf62b41abed7e015315a5664208f9" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.075871 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-25g4q" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.087306 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m2t2g" event={"ID":"25b8bcc6-a968-4bfc-ba6c-7431d3f41deb","Type":"ContainerDied","Data":"08d02cf98434fbaf3c5caf2bda22cee72b0f35a78df7ff1dc3e504ec0a3cce31"} Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.087337 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d02cf98434fbaf3c5caf2bda22cee72b0f35a78df7ff1dc3e504ec0a3cce31" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.087373 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m2t2g" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.089578 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t6rff" event={"ID":"e8216556-80f1-47a9-8a8c-d01c57f8dd71","Type":"ContainerDied","Data":"60689da87a23a03bbe2c951a8714b4fbd0f4f11db4e7533d061efdb04d43c61d"} Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.089586 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t6rff" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.089615 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60689da87a23a03bbe2c951a8714b4fbd0f4f11db4e7533d061efdb04d43c61d" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.095103 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qbknh" Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.095656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qbknh" event={"ID":"d9b042eb-c890-4f83-96c3-a1f2fbd6d712","Type":"ContainerDied","Data":"016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732"} Jan 31 05:00:54 crc kubenswrapper[4832]: I0131 05:00:54.095680 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="016e09da1e744e4ac9f5f5e3cdbfc5bc4347262b2e1c5cbf7eb81bbdd0de2732" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.129925 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.131395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2130-account-create-update-hsf5h" event={"ID":"dac9656f-3e3a-486c-a90e-94162a824223","Type":"ContainerDied","Data":"2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5"} Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.131421 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2734d428d005ee0d2c48ec5ed539ba74414638930efa862c42720a3c910978e5" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.260740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts\") pod \"dac9656f-3e3a-486c-a90e-94162a824223\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.262477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l796t\" (UniqueName: \"kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t\") pod 
\"dac9656f-3e3a-486c-a90e-94162a824223\" (UID: \"dac9656f-3e3a-486c-a90e-94162a824223\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.262310 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac9656f-3e3a-486c-a90e-94162a824223" (UID: "dac9656f-3e3a-486c-a90e-94162a824223"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.273025 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t" (OuterVolumeSpecName: "kube-api-access-l796t") pod "dac9656f-3e3a-486c-a90e-94162a824223" (UID: "dac9656f-3e3a-486c-a90e-94162a824223"). InnerVolumeSpecName "kube-api-access-l796t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.365682 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac9656f-3e3a-486c-a90e-94162a824223-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.365718 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l796t\" (UniqueName: \"kubernetes.io/projected/dac9656f-3e3a-486c-a90e-94162a824223-kube-api-access-l796t\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.568408 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8sq59-config-p49cd"] Jan 31 05:00:57 crc kubenswrapper[4832]: W0131 05:00:57.746639 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6425d5e5_1736_4bcb_97f6_d362014ae7df.slice/crio-da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667 WatchSource:0}: Error finding container da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667: Status 404 returned error can't find the container with id da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667 Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.848233 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.861340 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:57 crc kubenswrapper[4832]: E0131 05:00:57.966774 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.977285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts\") pod \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.977717 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg4kn\" (UniqueName: 
\"kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn\") pod \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\" (UID: \"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.977879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts\") pod \"2f6fea4b-c60e-4068-9814-50caf6f127aa\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.978219 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7qrz\" (UniqueName: \"kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz\") pod \"2f6fea4b-c60e-4068-9814-50caf6f127aa\" (UID: \"2f6fea4b-c60e-4068-9814-50caf6f127aa\") " Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.979450 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" (UID: "595cc66d-63ae-4fe8-bf8a-457a6dd22bdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.979731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f6fea4b-c60e-4068-9814-50caf6f127aa" (UID: "2f6fea4b-c60e-4068-9814-50caf6f127aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.983407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz" (OuterVolumeSpecName: "kube-api-access-k7qrz") pod "2f6fea4b-c60e-4068-9814-50caf6f127aa" (UID: "2f6fea4b-c60e-4068-9814-50caf6f127aa"). InnerVolumeSpecName "kube-api-access-k7qrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:57 crc kubenswrapper[4832]: I0131 05:00:57.987177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn" (OuterVolumeSpecName: "kube-api-access-bg4kn") pod "595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" (UID: "595cc66d-63ae-4fe8-bf8a-457a6dd22bdf"). InnerVolumeSpecName "kube-api-access-bg4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.081919 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg4kn\" (UniqueName: \"kubernetes.io/projected/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-kube-api-access-bg4kn\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.082397 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f6fea4b-c60e-4068-9814-50caf6f127aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.082409 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7qrz\" (UniqueName: \"kubernetes.io/projected/2f6fea4b-c60e-4068-9814-50caf6f127aa-kube-api-access-k7qrz\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.082418 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.147811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1920-account-create-update-kjz5k" event={"ID":"595cc66d-63ae-4fe8-bf8a-457a6dd22bdf","Type":"ContainerDied","Data":"180a2114b466266b50deede026f0914d1b7de0ee00d5f280839c9fda56504e43"} Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.147863 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180a2114b466266b50deede026f0914d1b7de0ee00d5f280839c9fda56504e43" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.147940 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1920-account-create-update-kjz5k" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.152348 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-p49cd" event={"ID":"6425d5e5-1736-4bcb-97f6-d362014ae7df","Type":"ContainerStarted","Data":"da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667"} Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.153794 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfck" event={"ID":"4d6dbe3e-1852-493a-926a-95d85495da09","Type":"ContainerStarted","Data":"bc9ccf48fe0badbd78f5b2ad5ea3f295a94ba77b3dc9155e92768422604bd507"} Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.167467 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a5be-account-create-update-r78dv" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.167660 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a5be-account-create-update-r78dv" event={"ID":"2f6fea4b-c60e-4068-9814-50caf6f127aa","Type":"ContainerDied","Data":"920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124"} Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.167723 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920809ae62cc7b660c46dcfeb51c14874a5f0068d4964ec9048ddb473a65a124" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.184509 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2130-account-create-update-hsf5h" Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.186989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"20da39525b6df217f2faed4a707b7e3958cafca0a0ba65947127c743e027823c"} Jan 31 05:00:58 crc kubenswrapper[4832]: I0131 05:00:58.190965 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wrfck" podStartSLOduration=2.364365832 podStartE2EDuration="10.190935139s" podCreationTimestamp="2026-01-31 05:00:48 +0000 UTC" firstStartedPulling="2026-01-31 05:00:50.008434804 +0000 UTC m=+1058.957256489" lastFinishedPulling="2026-01-31 05:00:57.835004101 +0000 UTC m=+1066.783825796" observedRunningTime="2026-01-31 05:00:58.176910303 +0000 UTC m=+1067.125731978" watchObservedRunningTime="2026-01-31 05:00:58.190935139 +0000 UTC m=+1067.139756824" Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.198624 4832 generic.go:334] "Generic (PLEG): container finished" podID="ffef7600-94e5-444a-be7e-215a512c0233" containerID="bb0f0186b50c9eca7cc4e7874b8dd3d3df80f652db5e3cfed202595e39092956" 
exitCode=0 Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.198924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-92w4s" event={"ID":"ffef7600-94e5-444a-be7e-215a512c0233","Type":"ContainerDied","Data":"bb0f0186b50c9eca7cc4e7874b8dd3d3df80f652db5e3cfed202595e39092956"} Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.203767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"b5b4d025c20f581f70903ca616003571eafcbfc9456d722e6aae8e3dcd3e9abd"} Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.203828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"ded5de14059f5965d7a12227b12dafbb407cef1d369695283bf79fb227e265ef"} Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.203843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"13b5405b00c0c886fe116d685588686b06e302cc25f3dc75eb2fd36e24947b45"} Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.209263 4832 generic.go:334] "Generic (PLEG): container finished" podID="6425d5e5-1736-4bcb-97f6-d362014ae7df" containerID="8d6bbe390a8c57797ffc70708f7575454e63eb59cde24adee45308efba4a18de" exitCode=0 Jan 31 05:00:59 crc kubenswrapper[4832]: I0131 05:00:59.209474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-p49cd" event={"ID":"6425d5e5-1736-4bcb-97f6-d362014ae7df","Type":"ContainerDied","Data":"8d6bbe390a8c57797ffc70708f7575454e63eb59cde24adee45308efba4a18de"} Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.571120 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.667603 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.668162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.668267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wm4\" (UniqueName: \"kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.668285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.668321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.668407 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts\") pod \"6425d5e5-1736-4bcb-97f6-d362014ae7df\" (UID: \"6425d5e5-1736-4bcb-97f6-d362014ae7df\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.667873 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run" (OuterVolumeSpecName: "var-run") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.669715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.669740 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.670572 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts" (OuterVolumeSpecName: "scripts") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.670647 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.673495 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4" (OuterVolumeSpecName: "kube-api-access-k8wm4") pod "6425d5e5-1736-4bcb-97f6-d362014ae7df" (UID: "6425d5e5-1736-4bcb-97f6-d362014ae7df"). InnerVolumeSpecName "kube-api-access-k8wm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.679344 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-92w4s" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769523 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dgp\" (UniqueName: \"kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp\") pod \"ffef7600-94e5-444a-be7e-215a512c0233\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data\") pod \"ffef7600-94e5-444a-be7e-215a512c0233\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769637 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle\") pod \"ffef7600-94e5-444a-be7e-215a512c0233\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769723 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data\") pod \"ffef7600-94e5-444a-be7e-215a512c0233\" (UID: \"ffef7600-94e5-444a-be7e-215a512c0233\") " Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769963 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769978 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run\") on node 
\"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.769990 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.770005 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wm4\" (UniqueName: \"kubernetes.io/projected/6425d5e5-1736-4bcb-97f6-d362014ae7df-kube-api-access-k8wm4\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.770024 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6425d5e5-1736-4bcb-97f6-d362014ae7df-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.770037 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6425d5e5-1736-4bcb-97f6-d362014ae7df-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.775864 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ffef7600-94e5-444a-be7e-215a512c0233" (UID: "ffef7600-94e5-444a-be7e-215a512c0233"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.778880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp" (OuterVolumeSpecName: "kube-api-access-74dgp") pod "ffef7600-94e5-444a-be7e-215a512c0233" (UID: "ffef7600-94e5-444a-be7e-215a512c0233"). InnerVolumeSpecName "kube-api-access-74dgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.796236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffef7600-94e5-444a-be7e-215a512c0233" (UID: "ffef7600-94e5-444a-be7e-215a512c0233"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.817524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data" (OuterVolumeSpecName: "config-data") pod "ffef7600-94e5-444a-be7e-215a512c0233" (UID: "ffef7600-94e5-444a-be7e-215a512c0233"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.871115 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.871145 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.871155 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ffef7600-94e5-444a-be7e-215a512c0233-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:00 crc kubenswrapper[4832]: I0131 05:01:00.871165 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dgp\" (UniqueName: \"kubernetes.io/projected/ffef7600-94e5-444a-be7e-215a512c0233-kube-api-access-74dgp\") on node 
\"crc\" DevicePath \"\"" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.241864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"3b4a32e32e2bb72d78b57c9bc4af80021ac0f27c96cc023cb02b603d95b3d290"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.241918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"7127114e2f2a3467fe9f99fe0ea0ee6a5680bf8cd9fea061fea12a9b3861dc50"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.241928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"482c5c89d6e12b9681cb9bb409ef137d1810a6b97d622640b34283ef4dbefaaa"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.241937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"db4c1ce8961ca12fe3b3d1634c18f11ceaf6ce6f6c14e080780630e81cbe9a55"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.244148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8sq59-config-p49cd" event={"ID":"6425d5e5-1736-4bcb-97f6-d362014ae7df","Type":"ContainerDied","Data":"da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.244197 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4df426e0d021d9af62f8fdc7b8c6dbbea5c388dc8a1a0d7b0912b39c25f667" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.244759 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8sq59-config-p49cd" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.246183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-92w4s" event={"ID":"ffef7600-94e5-444a-be7e-215a512c0233","Type":"ContainerDied","Data":"0111e56ea5be097eaa9f8223db2902db11d6c055c8f33e90a5d16d35ce00e09a"} Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.246219 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0111e56ea5be097eaa9f8223db2902db11d6c055c8f33e90a5d16d35ce00e09a" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.246302 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-92w4s" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.701671 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8sq59-config-p49cd"] Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.725655 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8sq59-config-p49cd"] Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.741647 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742255 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac9656f-3e3a-486c-a90e-94162a824223" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742282 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac9656f-3e3a-486c-a90e-94162a824223" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742300 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8216556-80f1-47a9-8a8c-d01c57f8dd71" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742309 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e8216556-80f1-47a9-8a8c-d01c57f8dd71" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742327 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742337 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742351 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17737fb8-509f-41f5-ba32-e7079f79839c" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742360 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17737fb8-509f-41f5-ba32-e7079f79839c" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742379 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6fea4b-c60e-4068-9814-50caf6f127aa" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742389 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6fea4b-c60e-4068-9814-50caf6f127aa" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742400 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742408 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742425 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffef7600-94e5-444a-be7e-215a512c0233" containerName="glance-db-sync" Jan 
31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742433 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffef7600-94e5-444a-be7e-215a512c0233" containerName="glance-db-sync" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742451 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6425d5e5-1736-4bcb-97f6-d362014ae7df" containerName="ovn-config" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742459 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6425d5e5-1736-4bcb-97f6-d362014ae7df" containerName="ovn-config" Jan 31 05:01:01 crc kubenswrapper[4832]: E0131 05:01:01.742476 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b042eb-c890-4f83-96c3-a1f2fbd6d712" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742483 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b042eb-c890-4f83-96c3-a1f2fbd6d712" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742715 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b042eb-c890-4f83-96c3-a1f2fbd6d712" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742738 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="17737fb8-509f-41f5-ba32-e7079f79839c" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742759 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742769 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8216556-80f1-47a9-8a8c-d01c57f8dd71" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742785 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dac9656f-3e3a-486c-a90e-94162a824223" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742798 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6fea4b-c60e-4068-9814-50caf6f127aa" containerName="mariadb-account-create-update" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742809 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffef7600-94e5-444a-be7e-215a512c0233" containerName="glance-db-sync" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742823 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6425d5e5-1736-4bcb-97f6-d362014ae7df" containerName="ovn-config" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.742836 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" containerName="mariadb-database-create" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.744127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.760933 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.893724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.893846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.893908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.893941 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.893961 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9bk88\" (UniqueName: \"kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.900186 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6425d5e5-1736-4bcb-97f6-d362014ae7df" path="/var/lib/kubelet/pods/6425d5e5-1736-4bcb-97f6-d362014ae7df/volumes" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.996602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.996686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.996717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.996737 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bk88\" (UniqueName: \"kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " 
pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.996769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.997760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.998177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.998471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:01 crc kubenswrapper[4832]: I0131 05:01:01.998803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:02 crc kubenswrapper[4832]: I0131 05:01:02.026542 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9bk88\" (UniqueName: \"kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88\") pod \"dnsmasq-dns-74dc88fc-75zn2\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:02 crc kubenswrapper[4832]: I0131 05:01:02.085713 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:02 crc kubenswrapper[4832]: I0131 05:01:02.320082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"cb9c2a2715c3d06345f02c7f82832bcc32a5533f41d32c01034950e0ffac7645"} Jan 31 05:01:02 crc kubenswrapper[4832]: I0131 05:01:02.320139 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"944b44061d16b1cd18f5eaef63885a017b27e1b9937c43a9a395af5676a82887"} Jan 31 05:01:02 crc kubenswrapper[4832]: I0131 05:01:02.840898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.331880 4832 generic.go:334] "Generic (PLEG): container finished" podID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerID="fe640a3978bfad8bc6149619da785934aea9dfb14d297575b65e99a89f132127" exitCode=0 Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.332179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" event={"ID":"5bd927a2-c7b0-4aa6-9185-a23a935b8fde","Type":"ContainerDied","Data":"fe640a3978bfad8bc6149619da785934aea9dfb14d297575b65e99a89f132127"} Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.332213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" 
event={"ID":"5bd927a2-c7b0-4aa6-9185-a23a935b8fde","Type":"ContainerStarted","Data":"3241735b880b2dcfd7ad60410db05ee2f4ce3f5b83b146f5ac0fbeddfa34786e"} Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.345793 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087","Type":"ContainerStarted","Data":"1d493567f94d2e3f884bfac68e4dee2f3f6a5bd4db3a9305ed396d2d82ac912c"} Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.354068 4832 generic.go:334] "Generic (PLEG): container finished" podID="4d6dbe3e-1852-493a-926a-95d85495da09" containerID="bc9ccf48fe0badbd78f5b2ad5ea3f295a94ba77b3dc9155e92768422604bd507" exitCode=0 Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.354133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfck" event={"ID":"4d6dbe3e-1852-493a-926a-95d85495da09","Type":"ContainerDied","Data":"bc9ccf48fe0badbd78f5b2ad5ea3f295a94ba77b3dc9155e92768422604bd507"} Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.421532 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.576022203 podStartE2EDuration="40.421500898s" podCreationTimestamp="2026-01-31 05:00:23 +0000 UTC" firstStartedPulling="2026-01-31 05:00:48.307805999 +0000 UTC m=+1057.256627684" lastFinishedPulling="2026-01-31 05:01:00.153284694 +0000 UTC m=+1069.102106379" observedRunningTime="2026-01-31 05:01:03.414215811 +0000 UTC m=+1072.363037496" watchObservedRunningTime="2026-01-31 05:01:03.421500898 +0000 UTC m=+1072.370322583" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.751212 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.795249 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"] Jan 31 05:01:03 crc kubenswrapper[4832]: 
I0131 05:01:03.796610 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.798404 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.811619 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"] Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.890279 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.890353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.890611 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.890709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mhw\" (UniqueName: \"kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw\") pod 
\"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.890968 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.891111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.992734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.992792 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.992822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" 
(UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.992847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mhw\" (UniqueName: \"kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.993532 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.994232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.994247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.994173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.994833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.995544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:03 crc kubenswrapper[4832]: I0131 05:01:03.996302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.019828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mhw\" (UniqueName: \"kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw\") pod \"dnsmasq-dns-5f59b8f679-t7kvk\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.115761 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.372371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" event={"ID":"5bd927a2-c7b0-4aa6-9185-a23a935b8fde","Type":"ContainerStarted","Data":"3d93a48b0327ff25a7fc6b95b58ea42d9562dbc528bfecd9074135b89a13df01"} Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.373346 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.400672 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" podStartSLOduration=3.400649227 podStartE2EDuration="3.400649227s" podCreationTimestamp="2026-01-31 05:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:04.394894828 +0000 UTC m=+1073.343716533" watchObservedRunningTime="2026-01-31 05:01:04.400649227 +0000 UTC m=+1073.349470912" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.657952 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"] Jan 31 05:01:04 crc kubenswrapper[4832]: W0131 05:01:04.660616 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a3c066_3065_4b65_9b5e_17ddb432a9aa.slice/crio-91a87b00a6836728eae0697e7507f3fa077e23cfd60f299bde1e97c20c424b53 WatchSource:0}: Error finding container 91a87b00a6836728eae0697e7507f3fa077e23cfd60f299bde1e97c20c424b53: Status 404 returned error can't find the container with id 91a87b00a6836728eae0697e7507f3fa077e23cfd60f299bde1e97c20c424b53 Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.820730 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wrfck" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.915237 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle\") pod \"4d6dbe3e-1852-493a-926a-95d85495da09\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.917155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data\") pod \"4d6dbe3e-1852-493a-926a-95d85495da09\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.917929 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dg94\" (UniqueName: \"kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94\") pod \"4d6dbe3e-1852-493a-926a-95d85495da09\" (UID: \"4d6dbe3e-1852-493a-926a-95d85495da09\") " Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.922293 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94" (OuterVolumeSpecName: "kube-api-access-6dg94") pod "4d6dbe3e-1852-493a-926a-95d85495da09" (UID: "4d6dbe3e-1852-493a-926a-95d85495da09"). InnerVolumeSpecName "kube-api-access-6dg94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.943501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d6dbe3e-1852-493a-926a-95d85495da09" (UID: "4d6dbe3e-1852-493a-926a-95d85495da09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:04 crc kubenswrapper[4832]: I0131 05:01:04.972540 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data" (OuterVolumeSpecName: "config-data") pod "4d6dbe3e-1852-493a-926a-95d85495da09" (UID: "4d6dbe3e-1852-493a-926a-95d85495da09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.021280 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.021316 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6dbe3e-1852-493a-926a-95d85495da09-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.021327 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dg94\" (UniqueName: \"kubernetes.io/projected/4d6dbe3e-1852-493a-926a-95d85495da09-kube-api-access-6dg94\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.382188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wrfck" event={"ID":"4d6dbe3e-1852-493a-926a-95d85495da09","Type":"ContainerDied","Data":"f1412f4986f3ddca552e2c93148d60103ee1f5678860c0664f2ee9e9e10cffb9"} Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.382706 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1412f4986f3ddca552e2c93148d60103ee1f5678860c0664f2ee9e9e10cffb9" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.382282 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wrfck" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.384185 4832 generic.go:334] "Generic (PLEG): container finished" podID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerID="8e226e26834a8f83757fa4f354600bd2eea22ad03f7083f7b2f169af15719e45" exitCode=0 Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.384235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" event={"ID":"c3a3c066-3065-4b65-9b5e-17ddb432a9aa","Type":"ContainerDied","Data":"8e226e26834a8f83757fa4f354600bd2eea22ad03f7083f7b2f169af15719e45"} Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.384319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" event={"ID":"c3a3c066-3065-4b65-9b5e-17ddb432a9aa","Type":"ContainerStarted","Data":"91a87b00a6836728eae0697e7507f3fa077e23cfd60f299bde1e97c20c424b53"} Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.384611 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="dnsmasq-dns" containerID="cri-o://3d93a48b0327ff25a7fc6b95b58ea42d9562dbc528bfecd9074135b89a13df01" gracePeriod=10 Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.674700 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2g82r"] Jan 31 05:01:05 crc kubenswrapper[4832]: E0131 05:01:05.676047 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6dbe3e-1852-493a-926a-95d85495da09" containerName="keystone-db-sync" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.677417 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6dbe3e-1852-493a-926a-95d85495da09" containerName="keystone-db-sync" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.677765 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4d6dbe3e-1852-493a-926a-95d85495da09" containerName="keystone-db-sync" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.678435 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.695723 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"] Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.697233 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.697374 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.697400 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.698009 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.705470 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z4jm" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.716394 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2g82r"] Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zqs\" (UniqueName: 
\"kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742712 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.742735 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.789644 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 
05:01:05.791692 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.813390 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844658 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zqs\" (UniqueName: \"kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844804 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844860 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbhc\" (UniqueName: \"kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.844988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.857036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.857197 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.861020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data\") pod 
\"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.865862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.878920 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.884250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zqs\" (UniqueName: \"kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs\") pod \"keystone-bootstrap-2g82r\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.955866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.957207 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbhc\" (UniqueName: \"kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " 
pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.957297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.957401 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.957494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.957608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.958488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc 
kubenswrapper[4832]: I0131 05:01:05.959113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.960040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.960664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.961265 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.976497 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.981119 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.994083 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.996262 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.996626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-s9vzj" Jan 31 05:01:05 crc kubenswrapper[4832]: I0131 05:01:05.997045 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.005626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbhc\" (UniqueName: \"kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc\") pod \"dnsmasq-dns-bbf5cc879-wsc2h\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.018359 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.046473 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7r8nc"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.047912 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.061506 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.062894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.063123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.063225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.063440 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpjq\" (UniqueName: \"kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.063486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key\") pod \"horizon-596b8df6f7-stcv6\" (UID: 
\"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.063918 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wpsh4" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.064542 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.082247 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7r8nc"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.134205 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.138602 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.165336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.165687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.165794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id\") pod \"cinder-db-sync-7r8nc\" 
(UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.165879 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.165953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166038 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166112 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdgj\" (UniqueName: \"kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 
31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166381 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpjq\" (UniqueName: \"kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.166460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.167390 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.168545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.169058 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.183227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.204600 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpjq\" (UniqueName: \"kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq\") pod \"horizon-596b8df6f7-stcv6\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") " pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.204676 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9jhp9"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.206584 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.219999 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.221102 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.229094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nqwks" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.233632 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nvbkz"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.235067 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.245138 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.245494 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zf6cj" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.245670 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.274591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.274652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdgj\" (UniqueName: 
\"kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.274674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.281074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.281141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.281204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.281325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id\") pod \"cinder-db-sync-7r8nc\" (UID: 
\"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.304552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.311356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.311459 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9jhp9"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.323457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.324154 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.327079 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.332508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdgj\" (UniqueName: \"kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj\") pod \"cinder-db-sync-7r8nc\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385189 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdt4\" (UniqueName: \"kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpfvq\" (UniqueName: \"kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " 
pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.385502 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc 
kubenswrapper[4832]: I0131 05:01:06.395607 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nvbkz"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.414401 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.418708 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-z4rhk"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.427594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.437519 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fctwn" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.437800 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.490341 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-z4rhk"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.510286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" event={"ID":"c3a3c066-3065-4b65-9b5e-17ddb432a9aa","Type":"ContainerStarted","Data":"0e658aae6c64f67ba15cb2a91784051f659657d0a0478e2b587965e0acf8bd43"} Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.510522 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" containerID="cri-o://0e658aae6c64f67ba15cb2a91784051f659657d0a0478e2b587965e0acf8bd43" gracePeriod=10 Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.510869 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" 
Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.523714 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530106 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdt4\" (UniqueName: \"kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpfvq\" (UniqueName: \"kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " 
pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.530978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.577110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.603099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.620580 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.623048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.624014 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdt4\" (UniqueName: \"kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.624038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts\") pod \"placement-db-sync-nvbkz\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.637654 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9hm\" (UniqueName: \"kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.637845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.637905 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.735688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.745006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpfvq\" (UniqueName: \"kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq\") pod \"neutron-db-sync-9jhp9\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") " pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.748417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.756688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.759540 4832 generic.go:334] "Generic (PLEG): container finished" podID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerID="3d93a48b0327ff25a7fc6b95b58ea42d9562dbc528bfecd9074135b89a13df01" exitCode=0 Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.759626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" event={"ID":"5bd927a2-c7b0-4aa6-9185-a23a935b8fde","Type":"ContainerDied","Data":"3d93a48b0327ff25a7fc6b95b58ea42d9562dbc528bfecd9074135b89a13df01"} Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.814179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" event={"ID":"5bd927a2-c7b0-4aa6-9185-a23a935b8fde","Type":"ContainerDied","Data":"3241735b880b2dcfd7ad60410db05ee2f4ce3f5b83b146f5ac0fbeddfa34786e"} Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.814220 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3241735b880b2dcfd7ad60410db05ee2f4ce3f5b83b146f5ac0fbeddfa34786e" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.768688 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.795179 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.813750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9hm\" (UniqueName: \"kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.814239 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f78656df-w96px"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.793060 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.781176 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:06 crc kubenswrapper[4832]: E0131 05:01:06.815297 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="dnsmasq-dns" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.815317 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="dnsmasq-dns" Jan 31 05:01:06 crc kubenswrapper[4832]: E0131 05:01:06.815347 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="init" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.815354 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="init" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.815578 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" containerName="dnsmasq-dns" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.831766 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f78656df-w96px"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.831815 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.832767 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.846816 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.847994 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.848500 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.852133 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.852937 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pnlwb" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.853238 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.856808 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.867432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9hm\" (UniqueName: \"kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm\") pod \"barbican-db-sync-z4rhk\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.879502 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.891327 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.891542 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.907435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.907643 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918066 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bk88\" (UniqueName: \"kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88\") pod \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918211 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc\") pod \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb\") pod \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918307 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb\") pod \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config\") pod \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\" (UID: \"5bd927a2-c7b0-4aa6-9185-a23a935b8fde\") " Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c55\" (UniqueName: \"kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918977 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.918998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjpn\" (UniqueName: \"kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919255 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919274 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key\") pod \"horizon-f78656df-w96px\" (UID: 
\"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919319 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919346 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87m64\" (UniqueName: \"kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.919401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.933854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88" (OuterVolumeSpecName: "kube-api-access-9bk88") pod "5bd927a2-c7b0-4aa6-9185-a23a935b8fde" 
(UID: "5bd927a2-c7b0-4aa6-9185-a23a935b8fde"). InnerVolumeSpecName "kube-api-access-9bk88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.933932 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.974975 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:06 crc kubenswrapper[4832]: I0131 05:01:06.984898 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podStartSLOduration=3.98487597 podStartE2EDuration="3.98487597s" podCreationTimestamp="2026-01-31 05:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:06.607932747 +0000 UTC m=+1075.556754432" watchObservedRunningTime="2026-01-31 05:01:06.98487597 +0000 UTC m=+1075.933697655" Jan 31 05:01:07 crc kubenswrapper[4832]: W0131 05:01:07.001290 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65faf5b4_19a7_48d8_810f_04b1e09275dc.slice/crio-9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f WatchSource:0}: Error finding container 9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f: Status 404 returned error can't find the container with id 9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.002063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5bd927a2-c7b0-4aa6-9185-a23a935b8fde" (UID: "5bd927a2-c7b0-4aa6-9185-a23a935b8fde"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87m64\" (UniqueName: \"kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023390 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ck2\" (UniqueName: \"kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0\") pod 
\"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023629 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c55\" (UniqueName: \"kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023656 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " 
pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023776 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023812 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fzjpn\" (UniqueName: \"kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.023957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.024363 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.026760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " 
pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.027364 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.026156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.031062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.031751 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.031922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.035902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.038293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.038772 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.039094 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bk88\" (UniqueName: \"kubernetes.io/projected/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-kube-api-access-9bk88\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.039126 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.039773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.047022 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.049178 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.054163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.058092 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.059581 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.061488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.061952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key\") pod 
\"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.062255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzjpn\" (UniqueName: \"kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.065476 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.068802 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.069035 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.070978 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87m64\" (UniqueName: \"kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.076327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c55\" (UniqueName: \"kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55\") pod \"horizon-f78656df-w96px\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") " pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.078736 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.079976 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.138058 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154535 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154624 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbp7s\" (UniqueName: \"kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154830 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.154999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ck2\" (UniqueName: \"kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.155022 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.156122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.156852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.156939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.157942 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.158647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.160649 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.169393 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bd927a2-c7b0-4aa6-9185-a23a935b8fde" (UID: "5bd927a2-c7b0-4aa6-9185-a23a935b8fde"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.179683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bd927a2-c7b0-4aa6-9185-a23a935b8fde" (UID: "5bd927a2-c7b0-4aa6-9185-a23a935b8fde"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.185663 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f78656df-w96px" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.201318 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2g82r"] Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.201611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts\") pod \"ceilometer-0\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.211773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ck2\" (UniqueName: \"kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2\") pod \"dnsmasq-dns-56df8fb6b7-r442w\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.216111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.256799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.256889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.256950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbp7s\" (UniqueName: \"kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257351 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.257375 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.258983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.261541 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.266799 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"6f5548a4-e629-49b4-b3d8-cc6030c09439\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.275815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.275846 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.276247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.276773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.287639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config" (OuterVolumeSpecName: "config") pod "5bd927a2-c7b0-4aa6-9185-a23a935b8fde" (UID: "5bd927a2-c7b0-4aa6-9185-a23a935b8fde"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.290632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbp7s\" (UniqueName: \"kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.291430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.318701 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.324044 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.350494 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.361382 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd927a2-c7b0-4aa6-9185-a23a935b8fde-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.374312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.454525 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.689546 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7r8nc"] Jan 31 05:01:07 crc kubenswrapper[4832]: W0131 05:01:07.708905 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578c93b1_9bb7_4d11_9635_14805f3f91e1.slice/crio-bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484 WatchSource:0}: Error finding container bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484: Status 404 returned error can't find the container with id bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484 Jan 31 05:01:07 crc kubenswrapper[4832]: W0131 05:01:07.711308 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcc76655_d4cd_47c7_be0c_21e52514fe92.slice/crio-793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432 WatchSource:0}: Error finding container 793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432: Status 404 returned error can't find the container with id 793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432 Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.748314 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.850533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b8df6f7-stcv6" event={"ID":"578c93b1-9bb7-4d11-9635-14805f3f91e1","Type":"ContainerStarted","Data":"bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484"} Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.854261 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g82r" 
event={"ID":"65faf5b4-19a7-48d8-810f-04b1e09275dc","Type":"ContainerStarted","Data":"9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f"} Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.890160 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7r8nc" event={"ID":"bcc76655-d4cd-47c7-be0c-21e52514fe92","Type":"ContainerStarted","Data":"793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432"} Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.897722 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-75zn2" Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.897841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" event={"ID":"86aee0b2-18df-495b-b007-a8b8310af66a","Type":"ContainerStarted","Data":"701671c68f9fed6b5e542f3c5bfa7f03b955080b99a98a85649900a033a8ae9e"} Jan 31 05:01:07 crc kubenswrapper[4832]: I0131 05:01:07.941993 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nvbkz"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.014889 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.029003 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-75zn2"] Jan 31 05:01:08 crc kubenswrapper[4832]: W0131 05:01:08.079416 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14c30239_67eb_44a5_83cc_dbec561dade8.slice/crio-34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0 WatchSource:0}: Error finding container 34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0: Status 404 returned error can't find the container with id 34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0 Jan 31 
05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.089533 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9jhp9"] Jan 31 05:01:08 crc kubenswrapper[4832]: E0131 05:01:08.239143 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice/crio-5050fc9508d1662a638b588dc05e865586d33162592a490cbd9f6bb461fec559\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487160a9_724e_4892_a8e6_886547709572.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.284365 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-z4rhk"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.386853 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.497965 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f78656df-w96px"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.523326 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.535330 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.667540 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.727886 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.759854 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.794208 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.812347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.814962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.827116 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.862191 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.907164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nvbkz" event={"ID":"2a8a734d-c7f4-4fd7-b64f-a053592ee909","Type":"ContainerStarted","Data":"8cfbbd7883dfd84f5308528f01a67baef7600d397b497d2c29b10b368df53c3d"} Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.908200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9jhp9" event={"ID":"14c30239-67eb-44a5-83cc-dbec561dade8","Type":"ContainerStarted","Data":"34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0"} Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.924486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.924543 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjcs\" (UniqueName: \"kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.924630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.924681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:08 crc kubenswrapper[4832]: I0131 05:01:08.925031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.029913 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.030045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.030909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.030954 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjcs\" (UniqueName: \"kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.031806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.032209 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.032610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data\") pod 
\"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.032863 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.050478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.054217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjcs\" (UniqueName: \"kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs\") pod \"horizon-b849fc549-b2htl\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.154070 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:09 crc kubenswrapper[4832]: I0131 05:01:09.882829 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd927a2-c7b0-4aa6-9185-a23a935b8fde" path="/var/lib/kubelet/pods/5bd927a2-c7b0-4aa6-9185-a23a935b8fde/volumes" Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.956080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9jhp9" event={"ID":"14c30239-67eb-44a5-83cc-dbec561dade8","Type":"ContainerStarted","Data":"00a94f0854a8de1520bd9fd0bc51391272f8127faefbdd237bf99066c50d75f6"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.958029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g82r" event={"ID":"65faf5b4-19a7-48d8-810f-04b1e09275dc","Type":"ContainerStarted","Data":"907d52516810dd9efc0434f02fd48f0280240b66254f860221c35ac7d4bd2bc5"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.963069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerStarted","Data":"703f8a0ea9d4e5d0b4863b05f9a7f34d9998e6a51903236a4c40dbc3160f3e94"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.964807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f78656df-w96px" event={"ID":"58cb7047-9e1e-46d8-82b4-59a1af9d0937","Type":"ContainerStarted","Data":"b202175d9d4d8ade7c5cf79f7c849d3ea5adfafc9e6d140a947ca998ab3a19b6"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.983449 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerStarted","Data":"eafed0f7ad0184989a64a92502e903812eeb31c4f39d507e92ad7de43174bb6e"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.988414 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerStarted","Data":"039005b6b3adabbc0bcac6697541099c789176a1a1c5283de06a36e58dfd5384"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.988478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerStarted","Data":"e740cb3551e662230f9190f195a720ac52c9bd48c02f77c66bff9c37bde1e563"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.992691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" event={"ID":"86aee0b2-18df-495b-b007-a8b8310af66a","Type":"ContainerStarted","Data":"e7fd43e61d744dac1d2a7b985ec9197fff266b8a854947af4bd3942703fffd55"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.995085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerStarted","Data":"fd57cbb18708eade0b04172180d7381e460d48e5a30b3bf0e5d173254687bc1f"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:12.999117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z4rhk" event={"ID":"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36","Type":"ContainerStarted","Data":"ee4fa305932192f8a50cde760945ac74a8727e481f06d4fde33a4baa7277f531"} Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:13.025323 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2g82r" podStartSLOduration=8.025299161 podStartE2EDuration="8.025299161s" podCreationTimestamp="2026-01-31 05:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:12.985678699 +0000 UTC m=+1081.934500444" watchObservedRunningTime="2026-01-31 05:01:13.025299161 +0000 UTC 
m=+1081.974120846" Jan 31 05:01:13 crc kubenswrapper[4832]: I0131 05:01:13.101688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.042421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerStarted","Data":"a6663683fa134f702993a86f0fbc6cfcce1904b18cee25d74061e75b85854799"} Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.052498 4832 generic.go:334] "Generic (PLEG): container finished" podID="86aee0b2-18df-495b-b007-a8b8310af66a" containerID="e7fd43e61d744dac1d2a7b985ec9197fff266b8a854947af4bd3942703fffd55" exitCode=0 Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.052625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" event={"ID":"86aee0b2-18df-495b-b007-a8b8310af66a","Type":"ContainerDied","Data":"e7fd43e61d744dac1d2a7b985ec9197fff266b8a854947af4bd3942703fffd55"} Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.057966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerStarted","Data":"a95cdf00525a4f9548bfaf29ef5789ff38ec56f3f4d0514070eefc2d2fb46d0c"} Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.069950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerStarted","Data":"c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac"} Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.074377 4832 generic.go:334] "Generic (PLEG): container finished" podID="45635b27-9b86-4573-866f-74163da166b0" containerID="039005b6b3adabbc0bcac6697541099c789176a1a1c5283de06a36e58dfd5384" exitCode=0 Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 
05:01:14.074704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerDied","Data":"039005b6b3adabbc0bcac6697541099c789176a1a1c5283de06a36e58dfd5384"} Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.120793 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9jhp9" podStartSLOduration=8.120765468 podStartE2EDuration="8.120765468s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:14.118862499 +0000 UTC m=+1083.067684174" watchObservedRunningTime="2026-01-31 05:01:14.120765468 +0000 UTC m=+1083.069587153" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.123450 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.165620 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.258338 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzbhc\" (UniqueName: \"kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.264266 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.264304 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.264370 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.264544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.264652 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb\") pod \"86aee0b2-18df-495b-b007-a8b8310af66a\" (UID: \"86aee0b2-18df-495b-b007-a8b8310af66a\") " Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.265606 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc" (OuterVolumeSpecName: "kube-api-access-gzbhc") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "kube-api-access-gzbhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.297203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.304475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.314447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config" (OuterVolumeSpecName: "config") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.316349 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.318289 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "86aee0b2-18df-495b-b007-a8b8310af66a" (UID: "86aee0b2-18df-495b-b007-a8b8310af66a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368014 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzbhc\" (UniqueName: \"kubernetes.io/projected/86aee0b2-18df-495b-b007-a8b8310af66a-kube-api-access-gzbhc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368505 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368518 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368530 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-config\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368541 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:14 crc kubenswrapper[4832]: I0131 05:01:14.368553 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aee0b2-18df-495b-b007-a8b8310af66a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.094869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerStarted","Data":"9517948a835a5ad3f9dd42f45078f212904c0516d7f439ff50dacf8e4ba5aa25"} Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.095224 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.099331 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" event={"ID":"86aee0b2-18df-495b-b007-a8b8310af66a","Type":"ContainerDied","Data":"701671c68f9fed6b5e542f3c5bfa7f03b955080b99a98a85649900a033a8ae9e"} Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.099511 4832 scope.go:117] "RemoveContainer" containerID="e7fd43e61d744dac1d2a7b985ec9197fff266b8a854947af4bd3942703fffd55" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.099910 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-wsc2h" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.146829 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" podStartSLOduration=9.146747363 podStartE2EDuration="9.146747363s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:15.121691684 +0000 UTC m=+1084.070513369" watchObservedRunningTime="2026-01-31 05:01:15.146747363 +0000 UTC m=+1084.095569088" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.237218 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.246157 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-wsc2h"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.607827 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f78656df-w96px"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.662904 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:01:15 crc kubenswrapper[4832]: E0131 05:01:15.663757 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86aee0b2-18df-495b-b007-a8b8310af66a" containerName="init" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.663778 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86aee0b2-18df-495b-b007-a8b8310af66a" containerName="init" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.663999 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="86aee0b2-18df-495b-b007-a8b8310af66a" containerName="init" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.665215 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.671955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.680160 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.744175 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.771270 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6b9f547b-mrjcq"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.773256 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.802841 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6b9f547b-mrjcq"] Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvj4\" (UniqueName: \"kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814606 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814726 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.814761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.872936 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="86aee0b2-18df-495b-b007-a8b8310af66a" path="/var/lib/kubelet/pods/86aee0b2-18df-495b-b007-a8b8310af66a/volumes" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.916874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-tls-certs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.916947 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glvj4\" (UniqueName: \"kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917386 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: 
\"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917443 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/769ea643-f342-413c-a719-7c65e086b9eb-logs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-combined-ca-bundle\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-secret-key\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " 
pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917853 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-scripts\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.917901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-config-data\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.919158 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:15 crc kubenswrapper[4832]: I0131 05:01:15.919266 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrzp\" (UniqueName: \"kubernetes.io/projected/769ea643-f342-413c-a719-7c65e086b9eb-kube-api-access-2nrzp\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-combined-ca-bundle\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 
31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-secret-key\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-scripts\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-config-data\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nrzp\" (UniqueName: \"kubernetes.io/projected/769ea643-f342-413c-a719-7c65e086b9eb-kube-api-access-2nrzp\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-tls-certs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.021825 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/769ea643-f342-413c-a719-7c65e086b9eb-logs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.058937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.059003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-scripts\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.059050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.061743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/769ea643-f342-413c-a719-7c65e086b9eb-logs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.062250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs\") pod \"horizon-7fd59dbb48-vjkkx\" 
(UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.063014 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.063233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.063880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.063882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769ea643-f342-413c-a719-7c65e086b9eb-config-data\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.064681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-secret-key\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 
05:01:16.065234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-horizon-tls-certs\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.065513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nrzp\" (UniqueName: \"kubernetes.io/projected/769ea643-f342-413c-a719-7c65e086b9eb-kube-api-access-2nrzp\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.066434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769ea643-f342-413c-a719-7c65e086b9eb-combined-ca-bundle\") pod \"horizon-6f6b9f547b-mrjcq\" (UID: \"769ea643-f342-413c-a719-7c65e086b9eb\") " pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.066920 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glvj4\" (UniqueName: \"kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4\") pod \"horizon-7fd59dbb48-vjkkx\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.102041 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.124744 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerStarted","Data":"53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e"} Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.125056 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-log" containerID="cri-o://a95cdf00525a4f9548bfaf29ef5789ff38ec56f3f4d0514070eefc2d2fb46d0c" gracePeriod=30 Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.125641 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-httpd" containerID="cri-o://53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e" gracePeriod=30 Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.139080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerStarted","Data":"afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df"} Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.139241 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-log" containerID="cri-o://c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac" gracePeriod=30 Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.139282 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-httpd" containerID="cri-o://afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df" gracePeriod=30 Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.209974 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.209952526 podStartE2EDuration="10.209952526s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:16.178910751 +0000 UTC m=+1085.127732446" watchObservedRunningTime="2026-01-31 05:01:16.209952526 +0000 UTC m=+1085.158774211" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.212485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.212474755 podStartE2EDuration="10.212474755s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:16.207247912 +0000 UTC m=+1085.156069597" watchObservedRunningTime="2026-01-31 05:01:16.212474755 +0000 UTC m=+1085.161296440" Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.302233 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:16 crc kubenswrapper[4832]: W0131 05:01:16.551455 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac.scope: no such file or directory Jan 31 05:01:16 crc kubenswrapper[4832]: W0131 05:01:16.560262 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5548a4_e629_49b4_b3d8_cc6030c09439.slice/crio-conmon-53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5548a4_e629_49b4_b3d8_cc6030c09439.slice/crio-conmon-53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e.scope: no such file or directory Jan 31 05:01:16 crc kubenswrapper[4832]: W0131 05:01:16.560321 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5548a4_e629_49b4_b3d8_cc6030c09439.slice/crio-53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5548a4_e629_49b4_b3d8_cc6030c09439.slice/crio-53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e.scope: no such file or directory Jan 31 05:01:16 crc kubenswrapper[4832]: W0131 05:01:16.560348 4832 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-conmon-afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-conmon-afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df.scope: no such file or directory Jan 31 05:01:16 crc kubenswrapper[4832]: W0131 05:01:16.560379 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a4f64b_1243_4aec_8ef6_1bd74b2db3f8.slice/crio-afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df.scope: no such file or directory Jan 31 05:01:16 crc kubenswrapper[4832]: I0131 05:01:16.840913 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6b9f547b-mrjcq"] Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.109324 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.170494 4832 generic.go:334] "Generic (PLEG): container finished" podID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerID="afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df" exitCode=0 Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.170529 4832 generic.go:334] "Generic (PLEG): container finished" podID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerID="c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac" exitCode=143 Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.170585 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerDied","Data":"afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df"} Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.170617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerDied","Data":"c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac"} Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.174532 4832 generic.go:334] "Generic (PLEG): container finished" podID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerID="0e658aae6c64f67ba15cb2a91784051f659657d0a0478e2b587965e0acf8bd43" exitCode=137 Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.174712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" event={"ID":"c3a3c066-3065-4b65-9b5e-17ddb432a9aa","Type":"ContainerDied","Data":"0e658aae6c64f67ba15cb2a91784051f659657d0a0478e2b587965e0acf8bd43"} Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.177786 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerID="53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e" exitCode=0 Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.177814 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerID="a95cdf00525a4f9548bfaf29ef5789ff38ec56f3f4d0514070eefc2d2fb46d0c" exitCode=143 Jan 31 05:01:17 crc kubenswrapper[4832]: I0131 05:01:17.177816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerDied","Data":"53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e"} Jan 31 05:01:17 crc kubenswrapper[4832]: 
I0131 05:01:17.177845 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerDied","Data":"a95cdf00525a4f9548bfaf29ef5789ff38ec56f3f4d0514070eefc2d2fb46d0c"} Jan 31 05:01:18 crc kubenswrapper[4832]: I0131 05:01:18.190338 4832 generic.go:334] "Generic (PLEG): container finished" podID="65faf5b4-19a7-48d8-810f-04b1e09275dc" containerID="907d52516810dd9efc0434f02fd48f0280240b66254f860221c35ac7d4bd2bc5" exitCode=0 Jan 31 05:01:18 crc kubenswrapper[4832]: I0131 05:01:18.190490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g82r" event={"ID":"65faf5b4-19a7-48d8-810f-04b1e09275dc","Type":"ContainerDied","Data":"907d52516810dd9efc0434f02fd48f0280240b66254f860221c35ac7d4bd2bc5"} Jan 31 05:01:18 crc kubenswrapper[4832]: I0131 05:01:18.540723 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:01:18 crc kubenswrapper[4832]: I0131 05:01:18.540806 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:01:19 crc kubenswrapper[4832]: W0131 05:01:19.043607 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod769ea643_f342_413c_a719_7c65e086b9eb.slice/crio-ca8a6a784bcfc4730e942d487e9953c1d1bff802a92755c07137214658dc030e WatchSource:0}: Error finding container ca8a6a784bcfc4730e942d487e9953c1d1bff802a92755c07137214658dc030e: 
Status 404 returned error can't find the container with id ca8a6a784bcfc4730e942d487e9953c1d1bff802a92755c07137214658dc030e Jan 31 05:01:19 crc kubenswrapper[4832]: W0131 05:01:19.058684 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f959e1_19ff_4f88_927b_ef2d3ee6d87e.slice/crio-5df374a55811feed9339ff12019f8e4b13eeaeb640d5e576b238b02daa2875fe WatchSource:0}: Error finding container 5df374a55811feed9339ff12019f8e4b13eeaeb640d5e576b238b02daa2875fe: Status 404 returned error can't find the container with id 5df374a55811feed9339ff12019f8e4b13eeaeb640d5e576b238b02daa2875fe Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.117543 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.153552 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.158328 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.193786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.195988 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196070 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbp7s\" (UniqueName: \"kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196108 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196131 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87m64\" 
(UniqueName: \"kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196416 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196545 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts\") pod \"6f5548a4-e629-49b4-b3d8-cc6030c09439\" (UID: \"6f5548a4-e629-49b4-b3d8-cc6030c09439\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.196584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run\") pod \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\" (UID: \"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8\") " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.199661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.200147 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs" (OuterVolumeSpecName: "logs") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.200354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.201164 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs" (OuterVolumeSpecName: "logs") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.223537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts" (OuterVolumeSpecName: "scripts") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.224497 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s" (OuterVolumeSpecName: "kube-api-access-xbp7s") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "kube-api-access-xbp7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.231343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.232789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts" (OuterVolumeSpecName: "scripts") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.232835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.246149 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6b9f547b-mrjcq" event={"ID":"769ea643-f342-413c-a719-7c65e086b9eb","Type":"ContainerStarted","Data":"ca8a6a784bcfc4730e942d487e9953c1d1bff802a92755c07137214658dc030e"} Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.248522 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64" (OuterVolumeSpecName: "kube-api-access-87m64") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "kube-api-access-87m64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.255229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f5548a4-e629-49b4-b3d8-cc6030c09439","Type":"ContainerDied","Data":"fd57cbb18708eade0b04172180d7381e460d48e5a30b3bf0e5d173254687bc1f"} Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.255705 4832 scope.go:117] "RemoveContainer" containerID="53ad3a7440bfe2e260623c56fc1fc1880e0cb038ed23fbbdcb6f3ba0100fbb4e" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.255903 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.259909 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.282284 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.282871 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"63a4f64b-1243-4aec-8ef6-1bd74b2db3f8","Type":"ContainerDied","Data":"eafed0f7ad0184989a64a92502e903812eeb31c4f39d507e92ad7de43174bb6e"} Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.283295 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.295847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerStarted","Data":"5df374a55811feed9339ff12019f8e4b13eeaeb640d5e576b238b02daa2875fe"} Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300588 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300618 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300627 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300637 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbp7s\" (UniqueName: \"kubernetes.io/projected/6f5548a4-e629-49b4-b3d8-cc6030c09439-kube-api-access-xbp7s\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300647 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87m64\" (UniqueName: \"kubernetes.io/projected/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-kube-api-access-87m64\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300656 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300665 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300675 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300708 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300718 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc 
kubenswrapper[4832]: I0131 05:01:19.300728 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f5548a4-e629-49b4-b3d8-cc6030c09439-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.300743 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.332388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data" (OuterVolumeSpecName: "config-data") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.332432 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.340235 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.350334 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.350544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data" (OuterVolumeSpecName: "config-data") pod "6f5548a4-e629-49b4-b3d8-cc6030c09439" (UID: "6f5548a4-e629-49b4-b3d8-cc6030c09439"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.402608 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.402650 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5548a4-e629-49b4-b3d8-cc6030c09439-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.402669 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.402680 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.402692 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.409183 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs" (OuterVolumeSpecName: 
"public-tls-certs") pod "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" (UID: "63a4f64b-1243-4aec-8ef6-1bd74b2db3f8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.504342 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.615641 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.645216 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.676656 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.726886 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.761674 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: E0131 05:01:19.762293 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762314 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: E0131 05:01:19.762358 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762367 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: E0131 05:01:19.762382 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762389 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: E0131 05:01:19.762405 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762411 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762601 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762619 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762629 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" containerName="glance-log" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.762644 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" containerName="glance-httpd" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.763798 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.767405 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.767524 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.768715 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pnlwb" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.769114 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.795740 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.807733 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.810618 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.812186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.812728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947sc\" (UniqueName: \"kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814189 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814227 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814260 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.814963 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.815910 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.820452 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.878992 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="63a4f64b-1243-4aec-8ef6-1bd74b2db3f8" path="/var/lib/kubelet/pods/63a4f64b-1243-4aec-8ef6-1bd74b2db3f8/volumes" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.880299 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5548a4-e629-49b4-b3d8-cc6030c09439" path="/var/lib/kubelet/pods/6f5548a4-e629-49b4-b3d8-cc6030c09439/volumes" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.918723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947sc\" (UniqueName: \"kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.918850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.918879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.918930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 
05:01:19.918985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919269 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qjf\" (UniqueName: \"kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919710 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919777 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.919837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920035 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920101 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920162 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.920644 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.921987 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.924128 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.926177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.928057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.930718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.939504 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-947sc\" (UniqueName: \"kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc\") pod \"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:19 crc kubenswrapper[4832]: I0131 05:01:19.964751 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022217 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022257 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " 
pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022327 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qjf\" (UniqueName: \"kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.022681 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.023164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc 
kubenswrapper[4832]: I0131 05:01:20.023295 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.027980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.028841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.029469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.031357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.045533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-r6qjf\" (UniqueName: \"kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.065547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " pod="openstack/glance-default-external-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.092386 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:20 crc kubenswrapper[4832]: I0131 05:01:20.141274 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:01:22 crc kubenswrapper[4832]: I0131 05:01:22.354713 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:22 crc kubenswrapper[4832]: I0131 05:01:22.460711 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:01:22 crc kubenswrapper[4832]: I0131 05:01:22.461378 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" containerID="cri-o://e2f878d85001e88b170f1a3b82b187a445e5a69df7d135da3d7d9830228130ea" gracePeriod=10 Jan 31 05:01:23 crc kubenswrapper[4832]: I0131 05:01:23.216342 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: 
connect: connection refused" Jan 31 05:01:23 crc kubenswrapper[4832]: I0131 05:01:23.344409 4832 generic.go:334] "Generic (PLEG): container finished" podID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerID="e2f878d85001e88b170f1a3b82b187a445e5a69df7d135da3d7d9830228130ea" exitCode=0 Jan 31 05:01:23 crc kubenswrapper[4832]: I0131 05:01:23.344463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" event={"ID":"39e727aa-9180-4ebf-af96-20abf1d96bea","Type":"ContainerDied","Data":"e2f878d85001e88b170f1a3b82b187a445e5a69df7d135da3d7d9830228130ea"} Jan 31 05:01:28 crc kubenswrapper[4832]: I0131 05:01:28.216814 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 31 05:01:29 crc kubenswrapper[4832]: I0131 05:01:29.117553 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 31 05:01:33 crc kubenswrapper[4832]: I0131 05:01:33.216605 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 31 05:01:33 crc kubenswrapper[4832]: I0131 05:01:33.216749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:01:34 crc kubenswrapper[4832]: I0131 05:01:34.118429 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 31 05:01:34 crc kubenswrapper[4832]: I0131 05:01:34.119252 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:36 crc kubenswrapper[4832]: E0131 05:01:36.398170 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 05:01:36 crc kubenswrapper[4832]: E0131 05:01:36.398387 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h599hc9h66dh576h646h696hc6h58chb4h5dchfbh54h4h5d7h577h54ch655h684h555h69h575h66bh646h69h68dh5f9h55bh8ch89hbhfdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65c55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNo
tPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f78656df-w96px_openstack(58cb7047-9e1e-46d8-82b4-59a1af9d0937): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:01:36 crc kubenswrapper[4832]: E0131 05:01:36.400970 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f78656df-w96px" podUID="58cb7047-9e1e-46d8-82b4-59a1af9d0937" Jan 31 05:01:38 crc kubenswrapper[4832]: I0131 05:01:38.215822 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 31 05:01:38 crc kubenswrapper[4832]: I0131 05:01:38.515098 4832 generic.go:334] "Generic (PLEG): container finished" podID="14c30239-67eb-44a5-83cc-dbec561dade8" containerID="00a94f0854a8de1520bd9fd0bc51391272f8127faefbdd237bf99066c50d75f6" exitCode=0 Jan 31 05:01:38 crc kubenswrapper[4832]: I0131 05:01:38.515162 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9jhp9" 
event={"ID":"14c30239-67eb-44a5-83cc-dbec561dade8","Type":"ContainerDied","Data":"00a94f0854a8de1520bd9fd0bc51391272f8127faefbdd237bf99066c50d75f6"} Jan 31 05:01:38 crc kubenswrapper[4832]: E0131 05:01:38.591769 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 05:01:38 crc kubenswrapper[4832]: E0131 05:01:38.592279 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65h579h66fh84h678hcbh66h5c6hc7h69h9fh58bh5b5h5dbhb7h576h5d4h65bh5h66ch5h598h9ch5b4hf8h66ch79hc9h6h5c8h5c6h59cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqpjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:
[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-596b8df6f7-stcv6_openstack(578c93b1-9bb7-4d11-9635-14805f3f91e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:01:38 crc kubenswrapper[4832]: E0131 05:01:38.597339 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-596b8df6f7-stcv6" podUID="578c93b1-9bb7-4d11-9635-14805f3f91e1" Jan 31 05:01:39 crc kubenswrapper[4832]: I0131 05:01:39.119951 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.490850 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.495506 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.545582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g82r" event={"ID":"65faf5b4-19a7-48d8-810f-04b1e09275dc","Type":"ContainerDied","Data":"9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f"} Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.545632 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9507f918457917c86870d880bcae5b8fb59f68b2723785a16e8d9344b4fba65f" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.545599 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2g82r" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.562686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" event={"ID":"c3a3c066-3065-4b65-9b5e-17ddb432a9aa","Type":"ContainerDied","Data":"91a87b00a6836728eae0697e7507f3fa077e23cfd60f299bde1e97c20c424b53"} Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.562855 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636258 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2zqs\" (UniqueName: \"kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636316 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636445 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636606 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636654 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts\") pod \"65faf5b4-19a7-48d8-810f-04b1e09275dc\" (UID: \"65faf5b4-19a7-48d8-810f-04b1e09275dc\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.636879 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mhw\" (UniqueName: \"kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw\") pod \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\" (UID: \"c3a3c066-3065-4b65-9b5e-17ddb432a9aa\") " Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.649171 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.650362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.660331 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw" (OuterVolumeSpecName: "kube-api-access-27mhw") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "kube-api-access-27mhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.663312 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs" (OuterVolumeSpecName: "kube-api-access-j2zqs") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). 
InnerVolumeSpecName "kube-api-access-j2zqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.671130 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts" (OuterVolumeSpecName: "scripts") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.672967 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.692067 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data" (OuterVolumeSpecName: "config-data") pod "65faf5b4-19a7-48d8-810f-04b1e09275dc" (UID: "65faf5b4-19a7-48d8-810f-04b1e09275dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.700910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.708024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.715677 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.721320 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.724103 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config" (OuterVolumeSpecName: "config") pod "c3a3c066-3065-4b65-9b5e-17ddb432a9aa" (UID: "c3a3c066-3065-4b65-9b5e-17ddb432a9aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738411 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2zqs\" (UniqueName: \"kubernetes.io/projected/65faf5b4-19a7-48d8-810f-04b1e09275dc-kube-api-access-j2zqs\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738444 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738458 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738468 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738478 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738488 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738499 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738508 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738516 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65faf5b4-19a7-48d8-810f-04b1e09275dc-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738524 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738532 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.738541 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mhw\" (UniqueName: \"kubernetes.io/projected/c3a3c066-3065-4b65-9b5e-17ddb432a9aa-kube-api-access-27mhw\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.953625 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"]
Jan 31 05:01:40 crc kubenswrapper[4832]: I0131 05:01:40.981666 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-t7kvk"]
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.603114 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2g82r"]
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.611403 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2g82r"]
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.738659 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r7sxw"]
Jan 31 05:01:41 crc kubenswrapper[4832]: E0131 05:01:41.739230 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="init"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.739250 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="init"
Jan 31 05:01:41 crc kubenswrapper[4832]: E0131 05:01:41.739283 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.739290 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns"
Jan 31 05:01:41 crc kubenswrapper[4832]: E0131 05:01:41.739303 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65faf5b4-19a7-48d8-810f-04b1e09275dc" containerName="keystone-bootstrap"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.739312 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65faf5b4-19a7-48d8-810f-04b1e09275dc" containerName="keystone-bootstrap"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.739479 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.739492 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65faf5b4-19a7-48d8-810f-04b1e09275dc" containerName="keystone-bootstrap"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.740209 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.743701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.743902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.744707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z4jm"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.744844 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.745397 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.746806 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r7sxw"]
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862493 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckh7\" (UniqueName: \"kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.862710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.872930 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65faf5b4-19a7-48d8-810f-04b1e09275dc" path="/var/lib/kubelet/pods/65faf5b4-19a7-48d8-810f-04b1e09275dc/volumes"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.873482 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" path="/var/lib/kubelet/pods/c3a3c066-3065-4b65-9b5e-17ddb432a9aa/volumes"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.964870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.964942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.965029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.965059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.965082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckh7\" (UniqueName: \"kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.965134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.970967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.973390 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.973906 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.977372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.988870 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:41 crc kubenswrapper[4832]: I0131 05:01:41.997192 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckh7\" (UniqueName: \"kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7\") pod \"keystone-bootstrap-r7sxw\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.045663 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.045879 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsdgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7r8nc_openstack(bcc76655-d4cd-47c7-be0c-21e52514fe92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.047103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7r8nc" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.072915 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7sxw"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.075205 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f78656df-w96px"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.286998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts\") pod \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.287240 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data\") pod \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.287346 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65c55\" (UniqueName: \"kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55\") pod \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.287471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs\") pod \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.287521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key\") pod \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\" (UID: \"58cb7047-9e1e-46d8-82b4-59a1af9d0937\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.287664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts" (OuterVolumeSpecName: "scripts") pod "58cb7047-9e1e-46d8-82b4-59a1af9d0937" (UID: "58cb7047-9e1e-46d8-82b4-59a1af9d0937"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.288454 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.290355 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs" (OuterVolumeSpecName: "logs") pod "58cb7047-9e1e-46d8-82b4-59a1af9d0937" (UID: "58cb7047-9e1e-46d8-82b4-59a1af9d0937"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.289702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data" (OuterVolumeSpecName: "config-data") pod "58cb7047-9e1e-46d8-82b4-59a1af9d0937" (UID: "58cb7047-9e1e-46d8-82b4-59a1af9d0937"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.293216 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "58cb7047-9e1e-46d8-82b4-59a1af9d0937" (UID: "58cb7047-9e1e-46d8-82b4-59a1af9d0937"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.293435 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55" (OuterVolumeSpecName: "kube-api-access-65c55") pod "58cb7047-9e1e-46d8-82b4-59a1af9d0937" (UID: "58cb7047-9e1e-46d8-82b4-59a1af9d0937"). InnerVolumeSpecName "kube-api-access-65c55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.392627 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58cb7047-9e1e-46d8-82b4-59a1af9d0937-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.392696 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65c55\" (UniqueName: \"kubernetes.io/projected/58cb7047-9e1e-46d8-82b4-59a1af9d0937-kube-api-access-65c55\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.392710 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58cb7047-9e1e-46d8-82b4-59a1af9d0937-logs\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.392724 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/58cb7047-9e1e-46d8-82b4-59a1af9d0937-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.567908 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.568146 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl9hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-z4rhk_openstack(ba1ef32d-8b91-4c1e-b5d1-31a582df6f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.569303 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-z4rhk" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.597795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9jhp9" event={"ID":"14c30239-67eb-44a5-83cc-dbec561dade8","Type":"ContainerDied","Data":"34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0"}
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.597908 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f4ae3191c55e016626777119bf1920a2f36e92d1b7ac84138f509f99beb2e0"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.600172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b8df6f7-stcv6" event={"ID":"578c93b1-9bb7-4d11-9635-14805f3f91e1","Type":"ContainerDied","Data":"bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484"}
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.600195 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda3ccd34518fdd86452151b869da1d6dfcc2cc3457426b09ab008c3919cf484"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.602530 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f78656df-w96px"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.602708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f78656df-w96px" event={"ID":"58cb7047-9e1e-46d8-82b4-59a1af9d0937","Type":"ContainerDied","Data":"b202175d9d4d8ade7c5cf79f7c849d3ea5adfafc9e6d140a947ca998ab3a19b6"}
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.603847 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-z4rhk" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36"
Jan 31 05:01:42 crc kubenswrapper[4832]: E0131 05:01:42.604702 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7r8nc" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.651761 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b8df6f7-stcv6"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.660816 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9jhp9"
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.733626 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f78656df-w96px"]
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.745577 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f78656df-w96px"]
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.799954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle\") pod \"14c30239-67eb-44a5-83cc-dbec561dade8\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800096 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs\") pod \"578c93b1-9bb7-4d11-9635-14805f3f91e1\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800123 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config\") pod \"14c30239-67eb-44a5-83cc-dbec561dade8\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqpjq\" (UniqueName: \"kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq\") pod \"578c93b1-9bb7-4d11-9635-14805f3f91e1\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key\") pod \"578c93b1-9bb7-4d11-9635-14805f3f91e1\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpfvq\" (UniqueName: \"kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq\") pod \"14c30239-67eb-44a5-83cc-dbec561dade8\" (UID: \"14c30239-67eb-44a5-83cc-dbec561dade8\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts\") pod \"578c93b1-9bb7-4d11-9635-14805f3f91e1\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.800656 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data\") pod \"578c93b1-9bb7-4d11-9635-14805f3f91e1\" (UID: \"578c93b1-9bb7-4d11-9635-14805f3f91e1\") "
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.806330 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs" (OuterVolumeSpecName: "logs") pod "578c93b1-9bb7-4d11-9635-14805f3f91e1" (UID: "578c93b1-9bb7-4d11-9635-14805f3f91e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.808070 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq" (OuterVolumeSpecName: "kube-api-access-sqpjq") pod "578c93b1-9bb7-4d11-9635-14805f3f91e1" (UID: "578c93b1-9bb7-4d11-9635-14805f3f91e1"). InnerVolumeSpecName "kube-api-access-sqpjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.808865 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts" (OuterVolumeSpecName: "scripts") pod "578c93b1-9bb7-4d11-9635-14805f3f91e1" (UID: "578c93b1-9bb7-4d11-9635-14805f3f91e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.810351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data" (OuterVolumeSpecName: "config-data") pod "578c93b1-9bb7-4d11-9635-14805f3f91e1" (UID: "578c93b1-9bb7-4d11-9635-14805f3f91e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.813232 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "578c93b1-9bb7-4d11-9635-14805f3f91e1" (UID: "578c93b1-9bb7-4d11-9635-14805f3f91e1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.813266 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq" (OuterVolumeSpecName: "kube-api-access-rpfvq") pod "14c30239-67eb-44a5-83cc-dbec561dade8" (UID: "14c30239-67eb-44a5-83cc-dbec561dade8"). InnerVolumeSpecName "kube-api-access-rpfvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.836404 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14c30239-67eb-44a5-83cc-dbec561dade8" (UID: "14c30239-67eb-44a5-83cc-dbec561dade8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.842100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config" (OuterVolumeSpecName: "config") pod "14c30239-67eb-44a5-83cc-dbec561dade8" (UID: "14c30239-67eb-44a5-83cc-dbec561dade8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904509 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904588 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904609 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578c93b1-9bb7-4d11-9635-14805f3f91e1-logs\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904624 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/14c30239-67eb-44a5-83cc-dbec561dade8-config\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904639 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqpjq\" (UniqueName: \"kubernetes.io/projected/578c93b1-9bb7-4d11-9635-14805f3f91e1-kube-api-access-sqpjq\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904655 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/578c93b1-9bb7-4d11-9635-14805f3f91e1-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904666 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpfvq\" (UniqueName: \"kubernetes.io/projected/14c30239-67eb-44a5-83cc-dbec561dade8-kube-api-access-rpfvq\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:42 crc kubenswrapper[4832]: I0131 05:01:42.904677 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/578c93b1-9bb7-4d11-9635-14805f3f91e1-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.145934 4832 scope.go:117] "RemoveContainer" containerID="a95cdf00525a4f9548bfaf29ef5789ff38ec56f3f4d0514070eefc2d2fb46d0c"
Jan 31 05:01:43 crc kubenswrapper[4832]: E0131 05:01:43.153025 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Jan 31 05:01:43 crc kubenswrapper[4832]: E0131 05:01:43.153233 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97hd9h649h5b8h57ch566hbbh688h59ch9dh99h5c8hc6hc5hbbh5b4hffh646h96h554h9h5fh5bbh7ch64ch57fh87hbdh9ch58ch697h5d6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,Su
bPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzjpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46b0d335-0c75-4996-be63-bd416e988ced): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.304480 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.333538 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc\") pod \"39e727aa-9180-4ebf-af96-20abf1d96bea\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.333625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb\") pod \"39e727aa-9180-4ebf-af96-20abf1d96bea\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.333659 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config\") pod \"39e727aa-9180-4ebf-af96-20abf1d96bea\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.333740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8vs8\" (UniqueName: \"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8\") pod \"39e727aa-9180-4ebf-af96-20abf1d96bea\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.334075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb\") pod \"39e727aa-9180-4ebf-af96-20abf1d96bea\" (UID: \"39e727aa-9180-4ebf-af96-20abf1d96bea\") " Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.348938 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8" (OuterVolumeSpecName: "kube-api-access-q8vs8") pod "39e727aa-9180-4ebf-af96-20abf1d96bea" (UID: "39e727aa-9180-4ebf-af96-20abf1d96bea"). InnerVolumeSpecName "kube-api-access-q8vs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.392134 4832 scope.go:117] "RemoveContainer" containerID="afceb362351186c114a3dd6b4f3851ad62a7cbc51599240dce33585f3b73c6df" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.419508 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39e727aa-9180-4ebf-af96-20abf1d96bea" (UID: "39e727aa-9180-4ebf-af96-20abf1d96bea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.420276 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config" (OuterVolumeSpecName: "config") pod "39e727aa-9180-4ebf-af96-20abf1d96bea" (UID: "39e727aa-9180-4ebf-af96-20abf1d96bea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.420453 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39e727aa-9180-4ebf-af96-20abf1d96bea" (UID: "39e727aa-9180-4ebf-af96-20abf1d96bea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.436348 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8vs8\" (UniqueName: \"kubernetes.io/projected/39e727aa-9180-4ebf-af96-20abf1d96bea-kube-api-access-q8vs8\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.436713 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.436724 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.436735 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.437656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39e727aa-9180-4ebf-af96-20abf1d96bea" (UID: "39e727aa-9180-4ebf-af96-20abf1d96bea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.483257 4832 scope.go:117] "RemoveContainer" containerID="c07a5fbc0dfcf56726139259c1d5555a37020e17397be1fd170417d79f5e8cac" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.519605 4832 scope.go:117] "RemoveContainer" containerID="0e658aae6c64f67ba15cb2a91784051f659657d0a0478e2b587965e0acf8bd43" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.540108 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e727aa-9180-4ebf-af96-20abf1d96bea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.549130 4832 scope.go:117] "RemoveContainer" containerID="8e226e26834a8f83757fa4f354600bd2eea22ad03f7083f7b2f169af15719e45" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.643286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerStarted","Data":"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690"} Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.651685 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.653679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" event={"ID":"39e727aa-9180-4ebf-af96-20abf1d96bea","Type":"ContainerDied","Data":"1db3f11ccd7930dbcb469d4282a0a7b17f7bdddea720f84d321761bf00baf3a9"} Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.653754 4832 scope.go:117] "RemoveContainer" containerID="e2f878d85001e88b170f1a3b82b187a445e5a69df7d135da3d7d9830228130ea" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.678343 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b8df6f7-stcv6" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.679784 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9jhp9" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.708489 4832 scope.go:117] "RemoveContainer" containerID="f610b828839772cabec873637cdd80e941d787ed37c3c4aa27f4ec8666384255" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.709289 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.730855 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-8lwb5"] Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.827018 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.854238 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-596b8df6f7-stcv6"] Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.899083 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" path="/var/lib/kubelet/pods/39e727aa-9180-4ebf-af96-20abf1d96bea/volumes" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.899919 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578c93b1-9bb7-4d11-9635-14805f3f91e1" path="/var/lib/kubelet/pods/578c93b1-9bb7-4d11-9635-14805f3f91e1/volumes" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.900842 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cb7047-9e1e-46d8-82b4-59a1af9d0937" path="/var/lib/kubelet/pods/58cb7047-9e1e-46d8-82b4-59a1af9d0937/volumes" Jan 31 05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.901361 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 
05:01:43 crc kubenswrapper[4832]: I0131 05:01:43.936972 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r7sxw"] Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.086626 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:01:44 crc kubenswrapper[4832]: E0131 05:01:44.087218 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.087243 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" Jan 31 05:01:44 crc kubenswrapper[4832]: E0131 05:01:44.087268 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c30239-67eb-44a5-83cc-dbec561dade8" containerName="neutron-db-sync" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.087276 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c30239-67eb-44a5-83cc-dbec561dade8" containerName="neutron-db-sync" Jan 31 05:01:44 crc kubenswrapper[4832]: E0131 05:01:44.087298 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="init" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.087308 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="init" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.087571 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.087603 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c30239-67eb-44a5-83cc-dbec561dade8" containerName="neutron-db-sync" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.088930 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.091784 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.094316 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.099991 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nqwks" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.100290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.100513 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.102345 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.107969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.114510 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.129588 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-t7kvk" podUID="c3a3c066-3065-4b65-9b5e-17ddb432a9aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.157881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp5b\" (UniqueName: \"kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158033 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158066 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: 
I0131 05:01:44.158202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fv4\" (UniqueName: \"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.158405 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.260994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261160 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp5b\" (UniqueName: \"kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261257 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261374 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261412 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fv4\" (UniqueName: \"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.261491 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.273600 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.275106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.275259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.276001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.278776 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.279957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.284330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") 
" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.285388 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.285923 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.287062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fv4\" (UniqueName: \"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4\") pod \"neutron-c45f655bb-2jmz9\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.301652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp5b\" (UniqueName: \"kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b\") pod \"dnsmasq-dns-6b7b667979-j28pw\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.357359 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.382215 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.728043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerStarted","Data":"69a72e6374e6eb3698c30923b5814539f859cafba7da779ab7075fe752b7fe70"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.728511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerStarted","Data":"573200e795413f77253a85b8d18b678fc53056ba1c11752bc41e464f70e904df"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.728729 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b849fc549-b2htl" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon-log" containerID="cri-o://69a72e6374e6eb3698c30923b5814539f859cafba7da779ab7075fe752b7fe70" gracePeriod=30 Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.729333 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b849fc549-b2htl" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon" containerID="cri-o://573200e795413f77253a85b8d18b678fc53056ba1c11752bc41e464f70e904df" gracePeriod=30 Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.776962 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b849fc549-b2htl" podStartSLOduration=7.344608777 podStartE2EDuration="36.776934908s" podCreationTimestamp="2026-01-31 05:01:08 +0000 UTC" firstStartedPulling="2026-01-31 05:01:13.134740725 +0000 UTC m=+1082.083562410" lastFinishedPulling="2026-01-31 05:01:42.567066866 +0000 UTC m=+1111.515888541" observedRunningTime="2026-01-31 05:01:44.776165534 +0000 UTC m=+1113.724987219" watchObservedRunningTime="2026-01-31 05:01:44.776934908 +0000 UTC 
m=+1113.725756593" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.786163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6b9f547b-mrjcq" event={"ID":"769ea643-f342-413c-a719-7c65e086b9eb","Type":"ContainerStarted","Data":"6a58c782036b48d769016460ecd824c34c528f25e842897d9f14dca703082f60"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.786311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6b9f547b-mrjcq" event={"ID":"769ea643-f342-413c-a719-7c65e086b9eb","Type":"ContainerStarted","Data":"58b4eb5b72d37e47e5803ddd198d57385fdd7bc5c2517a92f474fed9535539a4"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.807511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.829945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nvbkz" event={"ID":"2a8a734d-c7f4-4fd7-b64f-a053592ee909","Type":"ContainerStarted","Data":"8f9daae95a73bd91a84eeb49f287a1705462ff7d7d9b0f3cd67b627a09d0fc84"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.835960 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f6b9f547b-mrjcq" podStartSLOduration=5.646203753 podStartE2EDuration="29.835935702s" podCreationTimestamp="2026-01-31 05:01:15 +0000 UTC" firstStartedPulling="2026-01-31 05:01:19.047782626 +0000 UTC m=+1087.996604351" lastFinishedPulling="2026-01-31 05:01:43.237514615 +0000 UTC m=+1112.186336300" observedRunningTime="2026-01-31 05:01:44.808228721 +0000 UTC m=+1113.757050406" watchObservedRunningTime="2026-01-31 05:01:44.835935702 +0000 UTC m=+1113.784757387" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.874902 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerStarted","Data":"eea85042b0a48b43acbdc30e9d59651e1560f2e997539968e180981b68ab7d5b"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.918055 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nvbkz" podStartSLOduration=4.309710663 podStartE2EDuration="38.918020505s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="2026-01-31 05:01:07.959594971 +0000 UTC m=+1076.908416656" lastFinishedPulling="2026-01-31 05:01:42.567904793 +0000 UTC m=+1111.516726498" observedRunningTime="2026-01-31 05:01:44.875343088 +0000 UTC m=+1113.824164773" watchObservedRunningTime="2026-01-31 05:01:44.918020505 +0000 UTC m=+1113.866842190" Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.932980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerStarted","Data":"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28"} Jan 31 05:01:44 crc kubenswrapper[4832]: I0131 05:01:44.964861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7sxw" event={"ID":"4a775d6b-5610-4b98-a570-8e98c9cadfd2","Type":"ContainerStarted","Data":"23daaec6d688849904c40bdf3e5f65c94e7664d233a917c3c047742307e5f253"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.008474 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7fd59dbb48-vjkkx" podStartSLOduration=5.833511478 podStartE2EDuration="30.008443167s" podCreationTimestamp="2026-01-31 05:01:15 +0000 UTC" firstStartedPulling="2026-01-31 05:01:19.062618647 +0000 UTC m=+1088.011440352" lastFinishedPulling="2026-01-31 05:01:43.237550356 +0000 UTC m=+1112.186372041" observedRunningTime="2026-01-31 05:01:45.007870679 +0000 UTC m=+1113.956692364" watchObservedRunningTime="2026-01-31 05:01:45.008443167 +0000 UTC 
m=+1113.957264882" Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.084541 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r7sxw" podStartSLOduration=4.084520763 podStartE2EDuration="4.084520763s" podCreationTimestamp="2026-01-31 05:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:45.081933963 +0000 UTC m=+1114.030755648" watchObservedRunningTime="2026-01-31 05:01:45.084520763 +0000 UTC m=+1114.033342448" Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.217042 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.499291 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.981915 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerStarted","Data":"2f781ace961fc09e1164f6a168e6a7ce8319d9b9748e1815251e3d800c8d6a68"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.985153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerStarted","Data":"76f998b1e804798c08d65ea44f5ba2aa8511eca9fb1b4b7092f30d05170b0879"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.986588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerStarted","Data":"3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.986617 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerStarted","Data":"ae4038edf8693f21d7a51c52fd1edcead078ffeb845f9646ceeb35b8f9bdc92e"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.987954 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7sxw" event={"ID":"4a775d6b-5610-4b98-a570-8e98c9cadfd2","Type":"ContainerStarted","Data":"1b40b92839680cdf521e100a2fa3b6eb94638df0fe404a0b4df5a594982ee426"} Jan 31 05:01:45 crc kubenswrapper[4832]: I0131 05:01:45.991158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" event={"ID":"cd0f7236-580a-416e-b24b-109b3eff6ac0","Type":"ContainerStarted","Data":"25e5af0ed5711a554933daba0d7a4cd8526fa3cadad1604b97b07f7bc363f91f"} Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.102796 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.102894 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.303168 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.303606 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.514393 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.516028 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.526942 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.528668 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.560140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwcr\" (UniqueName: \"kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.560198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.560401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.560897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " 
pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.561130 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.561209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.561289 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.577676 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.669798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.669849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.669883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.669909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwcr\" (UniqueName: \"kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.669930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.670255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.670353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs\") pod 
\"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.691944 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.692985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.702401 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.706425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.706873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc 
kubenswrapper[4832]: I0131 05:01:46.709829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.718241 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwcr\" (UniqueName: \"kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr\") pod \"neutron-84fb4958f7-pczxt\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:46 crc kubenswrapper[4832]: I0131 05:01:46.850145 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.023657 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerID="d643038de9befe52f280166c184b89011f274b4595f403c9e73826c0937aeb1b" exitCode=0 Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.024194 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" event={"ID":"cd0f7236-580a-416e-b24b-109b3eff6ac0","Type":"ContainerDied","Data":"d643038de9befe52f280166c184b89011f274b4595f403c9e73826c0937aeb1b"} Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.067653 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerStarted","Data":"0adb770fffda48e8d6e6a7859b642de3672c588b793b6c40c38872ab6c2059b7"} Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.077812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerStarted","Data":"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764"} Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.094145 4832 generic.go:334] "Generic (PLEG): container finished" podID="2a8a734d-c7f4-4fd7-b64f-a053592ee909" containerID="8f9daae95a73bd91a84eeb49f287a1705462ff7d7d9b0f3cd67b627a09d0fc84" exitCode=0 Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.097016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nvbkz" event={"ID":"2a8a734d-c7f4-4fd7-b64f-a053592ee909","Type":"ContainerDied","Data":"8f9daae95a73bd91a84eeb49f287a1705462ff7d7d9b0f3cd67b627a09d0fc84"} Jan 31 05:01:47 crc kubenswrapper[4832]: I0131 05:01:47.633810 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:01:47 crc kubenswrapper[4832]: W0131 05:01:47.662380 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7ba646e_a7ee_4e60_8396_5a9eb0590378.slice/crio-1d3157bc2913527613e8355614024898bcf0037fc69d7f70e49bb02cc0ac67e7 WatchSource:0}: Error finding container 1d3157bc2913527613e8355614024898bcf0037fc69d7f70e49bb02cc0ac67e7: Status 404 returned error can't find the container with id 1d3157bc2913527613e8355614024898bcf0037fc69d7f70e49bb02cc0ac67e7 Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.214858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerStarted","Data":"f2b58cf48366cb8fa2e610a60a2e1dd027dc03184f8b9d6d9337d9b04dd2e448"} Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.217180 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.217306 4832 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-8lwb5" podUID="39e727aa-9180-4ebf-af96-20abf1d96bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.233897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerStarted","Data":"62c0188a0762d2976aae6a9904c7e97f5e0b0e87e44bd61af2177f10558d45e6"} Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.246073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerStarted","Data":"589a20cd3e82be252115be21a675d4f8531512e35f5b3ad17c0d580ed7e07793"} Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.273157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" event={"ID":"cd0f7236-580a-416e-b24b-109b3eff6ac0","Type":"ContainerStarted","Data":"d9cc72cbd5d2fa881db72d46e3877e674d1448590716a74b614ab6473fe4f70e"} Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.274155 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.294025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerStarted","Data":"334190742e1382c51d3e3a093dc83cc88edc3155581a81e4ffe60739732f9b6a"} Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.294102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerStarted","Data":"1d3157bc2913527613e8355614024898bcf0037fc69d7f70e49bb02cc0ac67e7"} Jan 31 05:01:48 crc kubenswrapper[4832]: 
I0131 05:01:48.358437 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.358406332 podStartE2EDuration="29.358406332s" podCreationTimestamp="2026-01-31 05:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:48.34450281 +0000 UTC m=+1117.293324495" watchObservedRunningTime="2026-01-31 05:01:48.358406332 +0000 UTC m=+1117.307228007" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.395126 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c45f655bb-2jmz9" podStartSLOduration=4.395081113 podStartE2EDuration="4.395081113s" podCreationTimestamp="2026-01-31 05:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:48.257230426 +0000 UTC m=+1117.206052111" watchObservedRunningTime="2026-01-31 05:01:48.395081113 +0000 UTC m=+1117.343902798" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.522306 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.522280719 podStartE2EDuration="29.522280719s" podCreationTimestamp="2026-01-31 05:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:48.407944704 +0000 UTC m=+1117.356766409" watchObservedRunningTime="2026-01-31 05:01:48.522280719 +0000 UTC m=+1117.471102404" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.539872 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.539942 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.568549 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" podStartSLOduration=4.568510096 podStartE2EDuration="4.568510096s" podCreationTimestamp="2026-01-31 05:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:48.490109628 +0000 UTC m=+1117.438931303" watchObservedRunningTime="2026-01-31 05:01:48.568510096 +0000 UTC m=+1117.517331781" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.858765 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.943330 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts\") pod \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.943451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data\") pod \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.943507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle\") pod \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.943803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rdt4\" (UniqueName: \"kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4\") pod \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.943847 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs\") pod \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\" (UID: \"2a8a734d-c7f4-4fd7-b64f-a053592ee909\") " Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.945292 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs" (OuterVolumeSpecName: "logs") pod "2a8a734d-c7f4-4fd7-b64f-a053592ee909" (UID: "2a8a734d-c7f4-4fd7-b64f-a053592ee909"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.963765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts" (OuterVolumeSpecName: "scripts") pod "2a8a734d-c7f4-4fd7-b64f-a053592ee909" (UID: "2a8a734d-c7f4-4fd7-b64f-a053592ee909"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:48 crc kubenswrapper[4832]: I0131 05:01:48.963830 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4" (OuterVolumeSpecName: "kube-api-access-5rdt4") pod "2a8a734d-c7f4-4fd7-b64f-a053592ee909" (UID: "2a8a734d-c7f4-4fd7-b64f-a053592ee909"). InnerVolumeSpecName "kube-api-access-5rdt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.003789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data" (OuterVolumeSpecName: "config-data") pod "2a8a734d-c7f4-4fd7-b64f-a053592ee909" (UID: "2a8a734d-c7f4-4fd7-b64f-a053592ee909"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.010710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a8a734d-c7f4-4fd7-b64f-a053592ee909" (UID: "2a8a734d-c7f4-4fd7-b64f-a053592ee909"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.046234 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rdt4\" (UniqueName: \"kubernetes.io/projected/2a8a734d-c7f4-4fd7-b64f-a053592ee909-kube-api-access-5rdt4\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.046978 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8a734d-c7f4-4fd7-b64f-a053592ee909-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.047053 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.047109 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.047174 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8a734d-c7f4-4fd7-b64f-a053592ee909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.159776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.277760 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:01:49 crc kubenswrapper[4832]: E0131 05:01:49.280592 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8a734d-c7f4-4fd7-b64f-a053592ee909" containerName="placement-db-sync" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.280696 4832 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2a8a734d-c7f4-4fd7-b64f-a053592ee909" containerName="placement-db-sync" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.281014 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8a734d-c7f4-4fd7-b64f-a053592ee909" containerName="placement-db-sync" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.282308 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.287696 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.287959 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.296099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.332252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerStarted","Data":"12ac270587f81dc1ae2cb82016d79dc5ae88f9b329d9cb2f837f4b4f3f886a56"} Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.333189 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.343614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nvbkz" event={"ID":"2a8a734d-c7f4-4fd7-b64f-a053592ee909","Type":"ContainerDied","Data":"8cfbbd7883dfd84f5308528f01a67baef7600d397b497d2c29b10b368df53c3d"} Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.343657 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cfbbd7883dfd84f5308528f01a67baef7600d397b497d2c29b10b368df53c3d" 
Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.343711 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nvbkz" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.377658 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.377771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gvth\" (UniqueName: \"kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.377909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.377948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.377982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.378021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.378077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.381087 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84fb4958f7-pczxt" podStartSLOduration=3.3810602149999998 podStartE2EDuration="3.381060215s" podCreationTimestamp="2026-01-31 05:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:49.364777328 +0000 UTC m=+1118.313599014" watchObservedRunningTime="2026-01-31 05:01:49.381060215 +0000 UTC m=+1118.329881900" Jan 31 05:01:49 crc kubenswrapper[4832]: E0131 05:01:49.414013 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a8a734d_c7f4_4fd7_b64f_a053592ee909.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480385 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gvth\" (UniqueName: \"kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480774 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.480952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.486637 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.491912 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.496753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.504180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gvth\" (UniqueName: \"kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth\") pod 
\"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.505186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.505695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.510624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs\") pod \"placement-7459856588-428fk\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " pod="openstack/placement-7459856588-428fk" Jan 31 05:01:49 crc kubenswrapper[4832]: I0131 05:01:49.651004 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7459856588-428fk" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.093836 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.093900 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.093911 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.093921 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.141445 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.155380 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.155782 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.155801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.155810 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.201292 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.237466 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.240057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.264127 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:01:50 crc kubenswrapper[4832]: W0131 05:01:50.280663 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eee292a_4bfc_4a13_9c27_4d381520e7e9.slice/crio-5c11468336b5fdbb0eeda2a1fd90a5f37dfc56c2fc8b648fc58f7af43943e63d WatchSource:0}: Error finding container 5c11468336b5fdbb0eeda2a1fd90a5f37dfc56c2fc8b648fc58f7af43943e63d: Status 404 returned error can't find the container with id 5c11468336b5fdbb0eeda2a1fd90a5f37dfc56c2fc8b648fc58f7af43943e63d Jan 31 05:01:50 crc kubenswrapper[4832]: I0131 05:01:50.375817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerStarted","Data":"5c11468336b5fdbb0eeda2a1fd90a5f37dfc56c2fc8b648fc58f7af43943e63d"} Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.387021 4832 generic.go:334] "Generic (PLEG): container finished" podID="4a775d6b-5610-4b98-a570-8e98c9cadfd2" containerID="1b40b92839680cdf521e100a2fa3b6eb94638df0fe404a0b4df5a594982ee426" exitCode=0 Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.387145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7sxw" event={"ID":"4a775d6b-5610-4b98-a570-8e98c9cadfd2","Type":"ContainerDied","Data":"1b40b92839680cdf521e100a2fa3b6eb94638df0fe404a0b4df5a594982ee426"} Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.392502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerStarted","Data":"fe7b6ad6b226ece1c0b334931e5e1060ff4722f1f04ef225d248550a9dd0df4e"} Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.392642 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerStarted","Data":"204f44d3e0b0ce04695dc29baa0901724195b2fe33957f5a8fb73bf893740d7a"} Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.392762 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7459856588-428fk" Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.392785 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7459856588-428fk" Jan 31 05:01:51 crc kubenswrapper[4832]: I0131 05:01:51.447520 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7459856588-428fk" podStartSLOduration=2.447499726 podStartE2EDuration="2.447499726s" podCreationTimestamp="2026-01-31 05:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:51.442202861 +0000 UTC m=+1120.391024546" watchObservedRunningTime="2026-01-31 05:01:51.447499726 +0000 UTC m=+1120.396321401" Jan 31 05:01:53 crc kubenswrapper[4832]: I0131 05:01:53.150746 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:53 crc kubenswrapper[4832]: I0131 05:01:53.239463 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 05:01:54 crc kubenswrapper[4832]: I0131 05:01:54.383762 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:01:54 crc 
kubenswrapper[4832]: I0131 05:01:54.477366 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:54 crc kubenswrapper[4832]: I0131 05:01:54.477673 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="dnsmasq-dns" containerID="cri-o://9517948a835a5ad3f9dd42f45078f212904c0516d7f439ff50dacf8e4ba5aa25" gracePeriod=10 Jan 31 05:01:55 crc kubenswrapper[4832]: I0131 05:01:55.487541 4832 generic.go:334] "Generic (PLEG): container finished" podID="45635b27-9b86-4573-866f-74163da166b0" containerID="9517948a835a5ad3f9dd42f45078f212904c0516d7f439ff50dacf8e4ba5aa25" exitCode=0 Jan 31 05:01:55 crc kubenswrapper[4832]: I0131 05:01:55.487606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerDied","Data":"9517948a835a5ad3f9dd42f45078f212904c0516d7f439ff50dacf8e4ba5aa25"} Jan 31 05:01:55 crc kubenswrapper[4832]: I0131 05:01:55.558134 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 05:01:55 crc kubenswrapper[4832]: I0131 05:01:55.742002 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 05:01:56 crc kubenswrapper[4832]: I0131 05:01:56.119259 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6b9f547b-mrjcq" podUID="769ea643-f342-413c-a719-7c65e086b9eb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 05:01:56 crc kubenswrapper[4832]: I0131 05:01:56.305112 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7fd59dbb48-vjkkx" 
podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 05:01:57 crc kubenswrapper[4832]: I0131 05:01:57.352470 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.567348 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r7sxw" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.656315 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.701381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.702464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.702705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 
crc kubenswrapper[4832]: I0131 05:01:58.702843 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cckh7\" (UniqueName: \"kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.703130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.703631 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.713224 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r7sxw" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.713230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r7sxw" event={"ID":"4a775d6b-5610-4b98-a570-8e98c9cadfd2","Type":"ContainerDied","Data":"23daaec6d688849904c40bdf3e5f65c94e7664d233a917c3c047742307e5f253"} Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.713392 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23daaec6d688849904c40bdf3e5f65c94e7664d233a917c3c047742307e5f253" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.715714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts" (OuterVolumeSpecName: "scripts") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.716446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" event={"ID":"45635b27-9b86-4573-866f-74163da166b0","Type":"ContainerDied","Data":"e740cb3551e662230f9190f195a720ac52c9bd48c02f77c66bff9c37bde1e563"} Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.716516 4832 scope.go:117] "RemoveContainer" containerID="9517948a835a5ad3f9dd42f45078f212904c0516d7f439ff50dacf8e4ba5aa25" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.716748 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-r442w" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.717851 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.720028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7" (OuterVolumeSpecName: "kube-api-access-cckh7") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "kube-api-access-cckh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.722667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.764140 4832 scope.go:117] "RemoveContainer" containerID="039005b6b3adabbc0bcac6697541099c789176a1a1c5283de06a36e58dfd5384" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.806063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc\") pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: E0131 05:01:58.806075 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data podName:4a775d6b-5610-4b98-a570-8e98c9cadfd2 nodeName:}" failed. No retries permitted until 2026-01-31 05:01:59.306043748 +0000 UTC m=+1128.254865433 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2") : error deleting /var/lib/kubelet/pods/4a775d6b-5610-4b98-a570-8e98c9cadfd2/volume-subpaths: remove /var/lib/kubelet/pods/4a775d6b-5610-4b98-a570-8e98c9cadfd2/volume-subpaths: no such file or directory Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.806442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ck2\" (UniqueName: \"kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2\") pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.807033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config\") 
pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.807138 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb\") pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.807332 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0\") pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.807585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb\") pod \"45635b27-9b86-4573-866f-74163da166b0\" (UID: \"45635b27-9b86-4573-866f-74163da166b0\") " Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.808189 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.808292 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.809099 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cckh7\" (UniqueName: \"kubernetes.io/projected/4a775d6b-5610-4b98-a570-8e98c9cadfd2-kube-api-access-cckh7\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc 
kubenswrapper[4832]: I0131 05:01:58.809188 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.817254 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2" (OuterVolumeSpecName: "kube-api-access-z8ck2") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "kube-api-access-z8ck2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.822145 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.883421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.887270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.893432 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config" (OuterVolumeSpecName: "config") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.897449 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.898357 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45635b27-9b86-4573-866f-74163da166b0" (UID: "45635b27-9b86-4573-866f-74163da166b0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911207 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911242 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ck2\" (UniqueName: \"kubernetes.io/projected/45635b27-9b86-4573-866f-74163da166b0-kube-api-access-z8ck2\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911255 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911267 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911278 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911287 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:58 crc kubenswrapper[4832]: I0131 05:01:58.911298 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45635b27-9b86-4573-866f-74163da166b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.057387 4832 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.071962 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-r442w"] Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.318758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") pod \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\" (UID: \"4a775d6b-5610-4b98-a570-8e98c9cadfd2\") " Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.328320 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data" (OuterVolumeSpecName: "config-data") pod "4a775d6b-5610-4b98-a570-8e98c9cadfd2" (UID: "4a775d6b-5610-4b98-a570-8e98c9cadfd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.424959 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a775d6b-5610-4b98-a570-8e98c9cadfd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.693808 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79bb65dc58-kdbq7"] Jan 31 05:01:59 crc kubenswrapper[4832]: E0131 05:01:59.695256 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="init" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.695274 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="init" Jan 31 05:01:59 crc kubenswrapper[4832]: E0131 05:01:59.695286 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45635b27-9b86-4573-866f-74163da166b0" 
containerName="dnsmasq-dns" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.695294 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="dnsmasq-dns" Jan 31 05:01:59 crc kubenswrapper[4832]: E0131 05:01:59.695315 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a775d6b-5610-4b98-a570-8e98c9cadfd2" containerName="keystone-bootstrap" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.695322 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a775d6b-5610-4b98-a570-8e98c9cadfd2" containerName="keystone-bootstrap" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.695539 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="45635b27-9b86-4573-866f-74163da166b0" containerName="dnsmasq-dns" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.695573 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a775d6b-5610-4b98-a570-8e98c9cadfd2" containerName="keystone-bootstrap" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.696231 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701044 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701398 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7z4jm" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701503 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701593 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701719 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.701790 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.747990 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7r8nc" event={"ID":"bcc76655-d4cd-47c7-be0c-21e52514fe92","Type":"ContainerStarted","Data":"771972d1658a47aef4a32953aea27c1123597ac57dbd26b26ec77f854939109f"} Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.753398 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79bb65dc58-kdbq7"] Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.768743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerStarted","Data":"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553"} Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.772895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z4rhk" 
event={"ID":"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36","Type":"ContainerStarted","Data":"7b00b61e86023088c49b00534964dafa1ae8cd8796cdf096fe53588e0f7cdd38"} Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.779415 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7r8nc" podStartSLOduration=4.101233643 podStartE2EDuration="54.779394057s" podCreationTimestamp="2026-01-31 05:01:05 +0000 UTC" firstStartedPulling="2026-01-31 05:01:07.727364779 +0000 UTC m=+1076.676186464" lastFinishedPulling="2026-01-31 05:01:58.405525193 +0000 UTC m=+1127.354346878" observedRunningTime="2026-01-31 05:01:59.766704862 +0000 UTC m=+1128.715526547" watchObservedRunningTime="2026-01-31 05:01:59.779394057 +0000 UTC m=+1128.728215742" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.814454 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-z4rhk" podStartSLOduration=7.799038332 podStartE2EDuration="53.814429076s" podCreationTimestamp="2026-01-31 05:01:06 +0000 UTC" firstStartedPulling="2026-01-31 05:01:12.385684251 +0000 UTC m=+1081.334505946" lastFinishedPulling="2026-01-31 05:01:58.401075005 +0000 UTC m=+1127.349896690" observedRunningTime="2026-01-31 05:01:59.787015274 +0000 UTC m=+1128.735836959" watchObservedRunningTime="2026-01-31 05:01:59.814429076 +0000 UTC m=+1128.763250771" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.832906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-public-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.832990 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-combined-ca-bundle\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-fernet-keys\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-internal-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-credential-keys\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-config-data\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833275 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxbc\" 
(UniqueName: \"kubernetes.io/projected/a8150cab-aaf2-42f5-8148-ffb124e56569-kube-api-access-dvxbc\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.833302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-scripts\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: E0131 05:01:59.866441 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a775d6b_5610_4b98_a570_8e98c9cadfd2.slice/crio-23daaec6d688849904c40bdf3e5f65c94e7664d233a917c3c047742307e5f253\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a775d6b_5610_4b98_a570_8e98c9cadfd2.slice\": RecentStats: unable to find data in memory cache]" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.879925 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45635b27-9b86-4573-866f-74163da166b0" path="/var/lib/kubelet/pods/45635b27-9b86-4573-866f-74163da166b0/volumes" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935534 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-credential-keys\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-config-data\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvxbc\" (UniqueName: \"kubernetes.io/projected/a8150cab-aaf2-42f5-8148-ffb124e56569-kube-api-access-dvxbc\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-scripts\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935738 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-public-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-combined-ca-bundle\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-fernet-keys\") pod 
\"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.935851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-internal-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.959546 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-internal-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960009 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-scripts\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960049 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxbc\" (UniqueName: \"kubernetes.io/projected/a8150cab-aaf2-42f5-8148-ffb124e56569-kube-api-access-dvxbc\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-credential-keys\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " 
pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-public-tls-certs\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-config-data\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.960958 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-fernet-keys\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:01:59 crc kubenswrapper[4832]: I0131 05:01:59.961375 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8150cab-aaf2-42f5-8148-ffb124e56569-combined-ca-bundle\") pod \"keystone-79bb65dc58-kdbq7\" (UID: \"a8150cab-aaf2-42f5-8148-ffb124e56569\") " pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:02:00 crc kubenswrapper[4832]: I0131 05:02:00.040491 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:02:00 crc kubenswrapper[4832]: I0131 05:02:00.550030 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79bb65dc58-kdbq7"] Jan 31 05:02:00 crc kubenswrapper[4832]: W0131 05:02:00.553707 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8150cab_aaf2_42f5_8148_ffb124e56569.slice/crio-4bf11b0ca0c18bf0eee4139ca14cd7ffa7ee3ab5244adf9edf24609ab3247874 WatchSource:0}: Error finding container 4bf11b0ca0c18bf0eee4139ca14cd7ffa7ee3ab5244adf9edf24609ab3247874: Status 404 returned error can't find the container with id 4bf11b0ca0c18bf0eee4139ca14cd7ffa7ee3ab5244adf9edf24609ab3247874 Jan 31 05:02:00 crc kubenswrapper[4832]: I0131 05:02:00.787597 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79bb65dc58-kdbq7" event={"ID":"a8150cab-aaf2-42f5-8148-ffb124e56569","Type":"ContainerStarted","Data":"4bf11b0ca0c18bf0eee4139ca14cd7ffa7ee3ab5244adf9edf24609ab3247874"} Jan 31 05:02:01 crc kubenswrapper[4832]: I0131 05:02:01.799612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79bb65dc58-kdbq7" event={"ID":"a8150cab-aaf2-42f5-8148-ffb124e56569","Type":"ContainerStarted","Data":"2af82fb2d904af204a11ce9e7e59507078da5d8edadb388c3d0124d4a6c10084"} Jan 31 05:02:01 crc kubenswrapper[4832]: I0131 05:02:01.801214 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79bb65dc58-kdbq7" Jan 31 05:02:01 crc kubenswrapper[4832]: I0131 05:02:01.833686 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79bb65dc58-kdbq7" podStartSLOduration=2.83366322 podStartE2EDuration="2.83366322s" podCreationTimestamp="2026-01-31 05:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
05:02:01.830212992 +0000 UTC m=+1130.779034677" watchObservedRunningTime="2026-01-31 05:02:01.83366322 +0000 UTC m=+1130.782484905" Jan 31 05:02:02 crc kubenswrapper[4832]: I0131 05:02:02.811444 4832 generic.go:334] "Generic (PLEG): container finished" podID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" containerID="7b00b61e86023088c49b00534964dafa1ae8cd8796cdf096fe53588e0f7cdd38" exitCode=0 Jan 31 05:02:02 crc kubenswrapper[4832]: I0131 05:02:02.811523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z4rhk" event={"ID":"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36","Type":"ContainerDied","Data":"7b00b61e86023088c49b00534964dafa1ae8cd8796cdf096fe53588e0f7cdd38"} Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.247265 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.347831 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle\") pod \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.347924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9hm\" (UniqueName: \"kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm\") pod \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.347992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data\") pod \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\" (UID: \"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36\") " Jan 31 05:02:04 crc 
kubenswrapper[4832]: I0131 05:02:04.355469 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" (UID: "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.372759 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm" (OuterVolumeSpecName: "kube-api-access-hl9hm") pod "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" (UID: "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36"). InnerVolumeSpecName "kube-api-access-hl9hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.379412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" (UID: "ba1ef32d-8b91-4c1e-b5d1-31a582df6f36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.451386 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.451435 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl9hm\" (UniqueName: \"kubernetes.io/projected/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-kube-api-access-hl9hm\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.451453 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.839725 4832 generic.go:334] "Generic (PLEG): container finished" podID="bcc76655-d4cd-47c7-be0c-21e52514fe92" containerID="771972d1658a47aef4a32953aea27c1123597ac57dbd26b26ec77f854939109f" exitCode=0 Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.839829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7r8nc" event={"ID":"bcc76655-d4cd-47c7-be0c-21e52514fe92","Type":"ContainerDied","Data":"771972d1658a47aef4a32953aea27c1123597ac57dbd26b26ec77f854939109f"} Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.845125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z4rhk" event={"ID":"ba1ef32d-8b91-4c1e-b5d1-31a582df6f36","Type":"ContainerDied","Data":"ee4fa305932192f8a50cde760945ac74a8727e481f06d4fde33a4baa7277f531"} Jan 31 05:02:04 crc kubenswrapper[4832]: I0131 05:02:04.845170 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee4fa305932192f8a50cde760945ac74a8727e481f06d4fde33a4baa7277f531" Jan 31 05:02:04 crc 
kubenswrapper[4832]: I0131 05:02:04.845181 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z4rhk" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.129367 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8f9d96795-d8rrf"] Jan 31 05:02:05 crc kubenswrapper[4832]: E0131 05:02:05.132979 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" containerName="barbican-db-sync" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.133015 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" containerName="barbican-db-sync" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.133221 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" containerName="barbican-db-sync" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.134224 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.138987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.139255 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.139365 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fctwn" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.157194 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f9d96795-d8rrf"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.169254 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c4c47d5bb-m6mqf"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.170925 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.173744 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.270783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df8e9f4-654c-449b-b5ce-2fb826d6449c-logs\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.270890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb47q\" (UniqueName: \"kubernetes.io/projected/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-kube-api-access-kb47q\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.270926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-combined-ca-bundle\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data-custom\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc 
kubenswrapper[4832]: I0131 05:02:05.271035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271058 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-logs\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data-custom\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-combined-ca-bundle\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: 
\"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.271188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c264w\" (UniqueName: \"kubernetes.io/projected/6df8e9f4-654c-449b-b5ce-2fb826d6449c-kube-api-access-c264w\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.285711 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c4c47d5bb-m6mqf"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.342646 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.344292 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.362847 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373123 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb47q\" (UniqueName: \"kubernetes.io/projected/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-kube-api-access-kb47q\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373177 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-combined-ca-bundle\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data-custom\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373314 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-logs\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373373 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data-custom\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373399 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-combined-ca-bundle\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c264w\" (UniqueName: \"kubernetes.io/projected/6df8e9f4-654c-449b-b5ce-2fb826d6449c-kube-api-access-c264w\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.373464 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df8e9f4-654c-449b-b5ce-2fb826d6449c-logs\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.374058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df8e9f4-654c-449b-b5ce-2fb826d6449c-logs\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.374073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-logs\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.382188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-combined-ca-bundle\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.384959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.395910 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data-custom\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.396431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-combined-ca-bundle\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.399464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-config-data\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.408398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c264w\" (UniqueName: \"kubernetes.io/projected/6df8e9f4-654c-449b-b5ce-2fb826d6449c-kube-api-access-c264w\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.412201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb47q\" (UniqueName: \"kubernetes.io/projected/0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa-kube-api-access-kb47q\") pod \"barbican-worker-8f9d96795-d8rrf\" (UID: \"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa\") " pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.414407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df8e9f4-654c-449b-b5ce-2fb826d6449c-config-data-custom\") pod \"barbican-keystone-listener-c4c47d5bb-m6mqf\" (UID: \"6df8e9f4-654c-449b-b5ce-2fb826d6449c\") " pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.457663 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8f9d96795-d8rrf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: 
\"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.475940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnbh\" (UniqueName: \"kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.492583 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.494430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.497084 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.506290 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"] Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.526256 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581129 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmvjs\" (UniqueName: \"kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnbh\" (UniqueName: \"kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: 
\"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom\") pod \"barbican-api-d775c5b5d-5df9r\" 
(UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.581477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.583267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.584072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.584208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.584341 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc 
kubenswrapper[4832]: I0131 05:02:05.584484 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.618523 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnbh\" (UniqueName: \"kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh\") pod \"dnsmasq-dns-848cf88cfc-9q8w4\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.676499 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.683103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.683180 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.683245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom\") pod 
\"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.683298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmvjs\" (UniqueName: \"kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.683341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.684692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.693224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.693521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " 
pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.694018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.708474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmvjs\" (UniqueName: \"kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs\") pod \"barbican-api-d775c5b5d-5df9r\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") " pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:05 crc kubenswrapper[4832]: I0131 05:02:05.853298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:06 crc kubenswrapper[4832]: I0131 05:02:06.104746 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6b9f547b-mrjcq" podUID="769ea643-f342-413c-a719-7c65e086b9eb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 05:02:06 crc kubenswrapper[4832]: I0131 05:02:06.304182 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.488207 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-797bd69d58-5ff8g"] Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 
05:02:08.495851 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.500279 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.500906 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.506728 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-797bd69d58-5ff8g"] Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.565706 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78lm\" (UniqueName: \"kubernetes.io/projected/becb2819-84d8-4a62-b98f-75e779ad0f56-kube-api-access-t78lm\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.565771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-internal-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.566353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.566514 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-combined-ca-bundle\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.566596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data-custom\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.566675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becb2819-84d8-4a62-b98f-75e779ad0f56-logs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.566700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-public-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668776 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668844 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-combined-ca-bundle\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data-custom\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-public-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becb2819-84d8-4a62-b98f-75e779ad0f56-logs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.668982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78lm\" (UniqueName: \"kubernetes.io/projected/becb2819-84d8-4a62-b98f-75e779ad0f56-kube-api-access-t78lm\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.669007 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-internal-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.669668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becb2819-84d8-4a62-b98f-75e779ad0f56-logs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.677126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-public-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.677752 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-internal-tls-certs\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.677820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data-custom\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.678717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-config-data\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.688509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becb2819-84d8-4a62-b98f-75e779ad0f56-combined-ca-bundle\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.696795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78lm\" (UniqueName: \"kubernetes.io/projected/becb2819-84d8-4a62-b98f-75e779ad0f56-kube-api-access-t78lm\") pod \"barbican-api-797bd69d58-5ff8g\" (UID: \"becb2819-84d8-4a62-b98f-75e779ad0f56\") " pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:08 crc kubenswrapper[4832]: I0131 05:02:08.823447 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:09 crc kubenswrapper[4832]: I0131 05:02:09.833283 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsdgj\" (UniqueName: \"kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008646 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.008788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle\") pod \"bcc76655-d4cd-47c7-be0c-21e52514fe92\" (UID: \"bcc76655-d4cd-47c7-be0c-21e52514fe92\") " Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.011802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.017053 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts" (OuterVolumeSpecName: "scripts") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.025778 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj" (OuterVolumeSpecName: "kube-api-access-jsdgj") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "kube-api-access-jsdgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.036673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.079356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7r8nc" event={"ID":"bcc76655-d4cd-47c7-be0c-21e52514fe92","Type":"ContainerDied","Data":"793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432"} Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.079427 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793fb11ab0e33e8b5dcdcd8e8ae68b5062c86f627919e7e5c8ae97483648f432" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.079517 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7r8nc" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.091775 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data" (OuterVolumeSpecName: "config-data") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.111903 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.111942 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.111955 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsdgj\" (UniqueName: \"kubernetes.io/projected/bcc76655-d4cd-47c7-be0c-21e52514fe92-kube-api-access-jsdgj\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.111963 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.111972 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bcc76655-d4cd-47c7-be0c-21e52514fe92-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.132735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcc76655-d4cd-47c7-be0c-21e52514fe92" (UID: "bcc76655-d4cd-47c7-be0c-21e52514fe92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:10 crc kubenswrapper[4832]: I0131 05:02:10.213622 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcc76655-d4cd-47c7-be0c-21e52514fe92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.150349 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:11 crc kubenswrapper[4832]: E0131 05:02:11.151366 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92" containerName="cinder-db-sync" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.151388 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92" containerName="cinder-db-sync" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.151647 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92" containerName="cinder-db-sync" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.153337 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.158635 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.158976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.159166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.159355 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wpsh4" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.185889 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251248 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzn6q\" (UniqueName: \"kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.251581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.261722 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.296991 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.299781 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.316745 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355073 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzn6q\" (UniqueName: \"kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355383 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.355680 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.370016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.370053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.387195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.387714 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.399338 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzn6q\" (UniqueName: \"kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q\") pod \"cinder-scheduler-0\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.459316 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.459429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.459742 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.459810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.460351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24k6\" (UniqueName: \"kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.460417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.499063 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.522214 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.524082 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.530257 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.536610 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.561974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.562066 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.562089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.562157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24k6\" (UniqueName: \"kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.562183 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.562235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.563357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.563668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.564415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.566830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.567509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.610826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24k6\" (UniqueName: \"kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6\") pod \"dnsmasq-dns-6578955fd5-b2hqg\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.631035 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.666293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.666456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.666646 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.666750 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.666823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24g7r\" (UniqueName: \"kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 
05:02:11.666974 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.667127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24g7r\" (UniqueName: \"kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769484 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id\") 
pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.769791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.770325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.770347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.774159 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.774880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.775365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.776491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.796250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24g7r\" (UniqueName: \"kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r\") pod \"cinder-api-0\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " pod="openstack/cinder-api-0" Jan 31 05:02:11 crc kubenswrapper[4832]: I0131 05:02:11.868515 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:12 crc kubenswrapper[4832]: I0131 05:02:12.562029 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:12 crc kubenswrapper[4832]: E0131 05:02:12.565503 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="46b0d335-0c75-4996-be63-bd416e988ced" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.126717 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.137045 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.176751 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb9468c0-3318-476e-a3a2-36755c5686d7" containerID="e0c905f7302644402e90de629e38007db4ac1a61a6f3680de0c47bf2ca32a34b" exitCode=0 Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.177042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" event={"ID":"eb9468c0-3318-476e-a3a2-36755c5686d7","Type":"ContainerDied","Data":"e0c905f7302644402e90de629e38007db4ac1a61a6f3680de0c47bf2ca32a34b"} Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.177099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" event={"ID":"eb9468c0-3318-476e-a3a2-36755c5686d7","Type":"ContainerStarted","Data":"c9508284c33d915fc6467d287daed0f7bf9af2bf69c3ef1b4ce92f5a3a848a0c"} Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.192002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerStarted","Data":"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5"} Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.192302 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="proxy-httpd" containerID="cri-o://69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5" gracePeriod=30 Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.193100 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="ceilometer-notification-agent" containerID="cri-o://346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764" gracePeriod=30 Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.197861 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="sg-core" containerID="cri-o://83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553" gracePeriod=30 Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.202987 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"] Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.408140 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c4c47d5bb-m6mqf"] Jan 31 05:02:13 crc kubenswrapper[4832]: W0131 05:02:13.422110 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df8e9f4_654c_449b_b5ce_2fb826d6449c.slice/crio-8dd9ed0bcb1ffd889ee5b9cb9949232fa8e7c78210b503103d86c1888e9feba9 WatchSource:0}: Error finding container 8dd9ed0bcb1ffd889ee5b9cb9949232fa8e7c78210b503103d86c1888e9feba9: Status 404 returned error can't find the 
container with id 8dd9ed0bcb1ffd889ee5b9cb9949232fa8e7c78210b503103d86c1888e9feba9 Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.433938 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.458766 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f9d96795-d8rrf"] Jan 31 05:02:13 crc kubenswrapper[4832]: W0131 05:02:13.468989 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d4cc16b_16ab_4f2e_9d54_e9dc3b40d9fa.slice/crio-e50fb877c4d070133f3cae189cfa16a0f6ab3a7a63e2b2dfc6cca35ac081975d WatchSource:0}: Error finding container e50fb877c4d070133f3cae189cfa16a0f6ab3a7a63e2b2dfc6cca35ac081975d: Status 404 returned error can't find the container with id e50fb877c4d070133f3cae189cfa16a0f6ab3a7a63e2b2dfc6cca35ac081975d Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.477623 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-797bd69d58-5ff8g"] Jan 31 05:02:13 crc kubenswrapper[4832]: W0131 05:02:13.484405 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecb2819_84d8_4a62_b98f_75e779ad0f56.slice/crio-be10d73e79e8dbd60da5b9e71abc0e4545139aad7cf7790b89845da59c7bd95f WatchSource:0}: Error finding container be10d73e79e8dbd60da5b9e71abc0e4545139aad7cf7790b89845da59c7bd95f: Status 404 returned error can't find the container with id be10d73e79e8dbd60da5b9e71abc0e4545139aad7cf7790b89845da59c7bd95f Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.490518 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:13 crc kubenswrapper[4832]: W0131 05:02:13.495140 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27f38d5e_d23c_48b9_9f07_855c990b3a78.slice/crio-190121e69279274448d60ac911eee6a417d6ac4856f86890a8a23f19e1b9bdea WatchSource:0}: Error finding container 190121e69279274448d60ac911eee6a417d6ac4856f86890a8a23f19e1b9bdea: Status 404 returned error can't find the container with id 190121e69279274448d60ac911eee6a417d6ac4856f86890a8a23f19e1b9bdea Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.709666 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.842904 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxnbh\" (UniqueName: \"kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.843375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.843511 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.843607 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: 
\"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.843859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.843966 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb\") pod \"eb9468c0-3318-476e-a3a2-36755c5686d7\" (UID: \"eb9468c0-3318-476e-a3a2-36755c5686d7\") " Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.850873 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh" (OuterVolumeSpecName: "kube-api-access-gxnbh") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). InnerVolumeSpecName "kube-api-access-gxnbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.870623 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.875897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.878979 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.893532 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.895083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config" (OuterVolumeSpecName: "config") pod "eb9468c0-3318-476e-a3a2-36755c5686d7" (UID: "eb9468c0-3318-476e-a3a2-36755c5686d7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947027 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxnbh\" (UniqueName: \"kubernetes.io/projected/eb9468c0-3318-476e-a3a2-36755c5686d7-kube-api-access-gxnbh\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947114 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947125 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947134 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947142 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:13 crc kubenswrapper[4832]: I0131 05:02:13.947152 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9468c0-3318-476e-a3a2-36755c5686d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.204762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" event={"ID":"6df8e9f4-654c-449b-b5ce-2fb826d6449c","Type":"ContainerStarted","Data":"8dd9ed0bcb1ffd889ee5b9cb9949232fa8e7c78210b503103d86c1888e9feba9"} Jan 31 
05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.206666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerStarted","Data":"190121e69279274448d60ac911eee6a417d6ac4856f86890a8a23f19e1b9bdea"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.209056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerStarted","Data":"a3f0098f01cd9efd61d11f50e7f0d883ca02df7cb3f7e9eff6bc7a6171800fd4"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.209106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerStarted","Data":"53b347cfd4d74f353c7aa1ec27042440313f790bec4c35668d351fd6187862ed"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.209117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerStarted","Data":"6ddc2c6c7d133540e03d8ee3a26b1b7b16544ccaf97f327498172dab16ef6f00"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.209179 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.209204 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.211037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f9d96795-d8rrf" event={"ID":"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa","Type":"ContainerStarted","Data":"e50fb877c4d070133f3cae189cfa16a0f6ab3a7a63e2b2dfc6cca35ac081975d"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.217259 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="46b0d335-0c75-4996-be63-bd416e988ced" containerID="69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5" exitCode=0 Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.217297 4832 generic.go:334] "Generic (PLEG): container finished" podID="46b0d335-0c75-4996-be63-bd416e988ced" containerID="83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553" exitCode=2 Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.217347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerDied","Data":"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.217376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerDied","Data":"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.220524 4832 generic.go:334] "Generic (PLEG): container finished" podID="8e019ace-3599-4661-8577-79ecf77e6011" containerID="ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede" exitCode=0 Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.220598 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" event={"ID":"8e019ace-3599-4661-8577-79ecf77e6011","Type":"ContainerDied","Data":"ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.220616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" event={"ID":"8e019ace-3599-4661-8577-79ecf77e6011","Type":"ContainerStarted","Data":"c8489cd81e77a42293fa97ed8bcf541a39402dec56f5b350f35755b32102027a"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.224554 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerStarted","Data":"7ee237afb67c4ac84f2994196ae2d20a859a733f39e2fcf79f1b2c7cdadc2e71"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.233097 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.235226 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-9q8w4" event={"ID":"eb9468c0-3318-476e-a3a2-36755c5686d7","Type":"ContainerDied","Data":"c9508284c33d915fc6467d287daed0f7bf9af2bf69c3ef1b4ce92f5a3a848a0c"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.236044 4832 scope.go:117] "RemoveContainer" containerID="e0c905f7302644402e90de629e38007db4ac1a61a6f3680de0c47bf2ca32a34b" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.246851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-797bd69d58-5ff8g" event={"ID":"becb2819-84d8-4a62-b98f-75e779ad0f56","Type":"ContainerStarted","Data":"30ccd77c4d6807ff5810c055328b02fdb073b4404c0c4b41ae3e14da76ddb701"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.246925 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-797bd69d58-5ff8g" event={"ID":"becb2819-84d8-4a62-b98f-75e779ad0f56","Type":"ContainerStarted","Data":"be10d73e79e8dbd60da5b9e71abc0e4545139aad7cf7790b89845da59c7bd95f"} Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.285805 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d775c5b5d-5df9r" podStartSLOduration=9.285765498 podStartE2EDuration="9.285765498s" podCreationTimestamp="2026-01-31 05:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:14.231656035 +0000 UTC 
m=+1143.180477720" watchObservedRunningTime="2026-01-31 05:02:14.285765498 +0000 UTC m=+1143.234587193" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.356726 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.380586 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.389218 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-9q8w4"] Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.642809 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.645739 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84fb4958f7-pczxt" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" containerID="cri-o://12ac270587f81dc1ae2cb82016d79dc5ae88f9b329d9cb2f837f4b4f3f886a56" gracePeriod=30 Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.643602 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84fb4958f7-pczxt" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-api" containerID="cri-o://334190742e1382c51d3e3a093dc83cc88edc3155581a81e4ffe60739732f9b6a" gracePeriod=30 Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.685093 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c7c54d8bf-w9s7x"] Jan 31 05:02:14 crc kubenswrapper[4832]: E0131 05:02:14.685591 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9468c0-3318-476e-a3a2-36755c5686d7" containerName="init" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.685610 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9468c0-3318-476e-a3a2-36755c5686d7" 
containerName="init" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.685838 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9468c0-3318-476e-a3a2-36755c5686d7" containerName="init" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.687871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.692037 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-84fb4958f7-pczxt" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": EOF" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.726815 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7c54d8bf-w9s7x"] Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-public-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrccj\" (UniqueName: \"kubernetes.io/projected/9192a7c5-49bb-4fed-858e-0c14b96f1288-kube-api-access-rrccj\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: 
\"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792744 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-ovndb-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-combined-ca-bundle\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-httpd-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.792819 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-internal-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.899842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-public-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: 
\"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.900050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.900392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrccj\" (UniqueName: \"kubernetes.io/projected/9192a7c5-49bb-4fed-858e-0c14b96f1288-kube-api-access-rrccj\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.900654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-ovndb-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.900762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-combined-ca-bundle\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.900842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-httpd-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 
crc kubenswrapper[4832]: I0131 05:02:14.900930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-internal-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.907791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-public-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.916477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-combined-ca-bundle\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.919452 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-httpd-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.928175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-internal-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.928582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-ovndb-tls-certs\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.930312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9192a7c5-49bb-4fed-858e-0c14b96f1288-config\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:14 crc kubenswrapper[4832]: I0131 05:02:14.931438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrccj\" (UniqueName: \"kubernetes.io/projected/9192a7c5-49bb-4fed-858e-0c14b96f1288-kube-api-access-rrccj\") pod \"neutron-7c7c54d8bf-w9s7x\" (UID: \"9192a7c5-49bb-4fed-858e-0c14b96f1288\") " pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.033355 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.264681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-797bd69d58-5ff8g" event={"ID":"becb2819-84d8-4a62-b98f-75e779ad0f56","Type":"ContainerStarted","Data":"6b1729afef0070be676677a1d18d3107985ea806f2ff53a76c7da4df20828d86"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.265311 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.265340 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.269317 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerStarted","Data":"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.273652 4832 generic.go:334] "Generic (PLEG): container finished" podID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerID="12ac270587f81dc1ae2cb82016d79dc5ae88f9b329d9cb2f837f4b4f3f886a56" exitCode=0 Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.273792 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerDied","Data":"12ac270587f81dc1ae2cb82016d79dc5ae88f9b329d9cb2f837f4b4f3f886a56"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.283863 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerID="573200e795413f77253a85b8d18b678fc53056ba1c11752bc41e464f70e904df" exitCode=137 Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.283912 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerID="69a72e6374e6eb3698c30923b5814539f859cafba7da779ab7075fe752b7fe70" exitCode=137 Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.283975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerDied","Data":"573200e795413f77253a85b8d18b678fc53056ba1c11752bc41e464f70e904df"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.284016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerDied","Data":"69a72e6374e6eb3698c30923b5814539f859cafba7da779ab7075fe752b7fe70"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.318984 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" event={"ID":"8e019ace-3599-4661-8577-79ecf77e6011","Type":"ContainerStarted","Data":"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.319181 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.327364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerStarted","Data":"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1"} Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.346285 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-797bd69d58-5ff8g" podStartSLOduration=7.346262207 podStartE2EDuration="7.346262207s" podCreationTimestamp="2026-01-31 05:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 
05:02:15.303547859 +0000 UTC m=+1144.252369544" watchObservedRunningTime="2026-01-31 05:02:15.346262207 +0000 UTC m=+1144.295083882" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.348282 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" podStartSLOduration=4.34827448 podStartE2EDuration="4.34827448s" podCreationTimestamp="2026-01-31 05:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:15.340133926 +0000 UTC m=+1144.288955611" watchObservedRunningTime="2026-01-31 05:02:15.34827448 +0000 UTC m=+1144.297096165" Jan 31 05:02:15 crc kubenswrapper[4832]: I0131 05:02:15.875053 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9468c0-3318-476e-a3a2-36755c5686d7" path="/var/lib/kubelet/pods/eb9468c0-3318-476e-a3a2-36755c5686d7/volumes" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.273635 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.349337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b849fc549-b2htl" event={"ID":"9e522509-0496-4e05-b5d0-935f5ef2fc75","Type":"ContainerDied","Data":"a6663683fa134f702993a86f0fbc6cfcce1904b18cee25d74061e75b85854799"} Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.349858 4832 scope.go:117] "RemoveContainer" containerID="573200e795413f77253a85b8d18b678fc53056ba1c11752bc41e464f70e904df" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.350085 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b849fc549-b2htl" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.366998 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api-log" containerID="cri-o://c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" gracePeriod=30 Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.367398 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerStarted","Data":"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38"} Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.368368 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.368670 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api" containerID="cri-o://c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" gracePeriod=30 Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.403609 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.403524486 podStartE2EDuration="5.403524486s" podCreationTimestamp="2026-01-31 05:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:16.396992562 +0000 UTC m=+1145.345814247" watchObservedRunningTime="2026-01-31 05:02:16.403524486 +0000 UTC m=+1145.352346171" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.435392 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key\") pod \"9e522509-0496-4e05-b5d0-935f5ef2fc75\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.435460 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgjcs\" (UniqueName: \"kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs\") pod \"9e522509-0496-4e05-b5d0-935f5ef2fc75\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.435698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts\") pod \"9e522509-0496-4e05-b5d0-935f5ef2fc75\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.435790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data\") pod \"9e522509-0496-4e05-b5d0-935f5ef2fc75\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.435854 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs\") pod \"9e522509-0496-4e05-b5d0-935f5ef2fc75\" (UID: \"9e522509-0496-4e05-b5d0-935f5ef2fc75\") " Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.447406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9e522509-0496-4e05-b5d0-935f5ef2fc75" (UID: "9e522509-0496-4e05-b5d0-935f5ef2fc75"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.452704 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs" (OuterVolumeSpecName: "kube-api-access-qgjcs") pod "9e522509-0496-4e05-b5d0-935f5ef2fc75" (UID: "9e522509-0496-4e05-b5d0-935f5ef2fc75"). InnerVolumeSpecName "kube-api-access-qgjcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.452895 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs" (OuterVolumeSpecName: "logs") pod "9e522509-0496-4e05-b5d0-935f5ef2fc75" (UID: "9e522509-0496-4e05-b5d0-935f5ef2fc75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.465922 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts" (OuterVolumeSpecName: "scripts") pod "9e522509-0496-4e05-b5d0-935f5ef2fc75" (UID: "9e522509-0496-4e05-b5d0-935f5ef2fc75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.495952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data" (OuterVolumeSpecName: "config-data") pod "9e522509-0496-4e05-b5d0-935f5ef2fc75" (UID: "9e522509-0496-4e05-b5d0-935f5ef2fc75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.556833 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e522509-0496-4e05-b5d0-935f5ef2fc75-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.556867 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgjcs\" (UniqueName: \"kubernetes.io/projected/9e522509-0496-4e05-b5d0-935f5ef2fc75-kube-api-access-qgjcs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.556880 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.556890 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e522509-0496-4e05-b5d0-935f5ef2fc75-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.556900 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e522509-0496-4e05-b5d0-935f5ef2fc75-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.678813 4832 scope.go:117] "RemoveContainer" containerID="69a72e6374e6eb3698c30923b5814539f859cafba7da779ab7075fe752b7fe70" Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.729178 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.750005 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b849fc549-b2htl"] Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.763275 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-7c7c54d8bf-w9s7x"] Jan 31 05:02:16 crc kubenswrapper[4832]: I0131 05:02:16.851825 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-84fb4958f7-pczxt" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": dial tcp 10.217.0.157:9696: connect: connection refused" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.309279 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.390877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" event={"ID":"6df8e9f4-654c-449b-b5ce-2fb826d6449c","Type":"ContainerStarted","Data":"5aed3f128a1ff125b93a6b85da561d0f8279e26508d6c9023c7326283d51cc90"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.403990 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.404111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.404192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: 
I0131 05:02:17.404222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.404257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24g7r\" (UniqueName: \"kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.404294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.404320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data\") pod \"27f38d5e-d23c-48b9-9f07-855c990b3a78\" (UID: \"27f38d5e-d23c-48b9-9f07-855c990b3a78\") " Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.405219 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs" (OuterVolumeSpecName: "logs") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.406475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413187 4832 generic.go:334] "Generic (PLEG): container finished" podID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerID="c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" exitCode=0 Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413270 4832 generic.go:334] "Generic (PLEG): container finished" podID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerID="c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" exitCode=143 Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413335 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerDied","Data":"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413382 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerDied","Data":"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"27f38d5e-d23c-48b9-9f07-855c990b3a78","Type":"ContainerDied","Data":"190121e69279274448d60ac911eee6a417d6ac4856f86890a8a23f19e1b9bdea"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413422 4832 scope.go:117] "RemoveContainer" 
containerID="c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts" (OuterVolumeSpecName: "scripts") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.413632 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.419152 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r" (OuterVolumeSpecName: "kube-api-access-24g7r") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "kube-api-access-24g7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.431878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.440039 4832 scope.go:117] "RemoveContainer" containerID="c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.440489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7c54d8bf-w9s7x" event={"ID":"9192a7c5-49bb-4fed-858e-0c14b96f1288","Type":"ContainerStarted","Data":"2cb846c2ef454ad7ce6a33fa81b732d1987c19bc05ec627eeae65f349cf089b8"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.451431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f9d96795-d8rrf" event={"ID":"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa","Type":"ContainerStarted","Data":"4fdea0bfcb3120fb563a0ce7751853ddd1f1581ffac2c3331eda0273ba674c44"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.451489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f9d96795-d8rrf" event={"ID":"0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa","Type":"ContainerStarted","Data":"9092d40220ab485c82e19c8ea2d56b63c0f60d0d2dbfdc0ce8c5a966fa5e1807"} Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.451732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.481475 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8f9d96795-d8rrf" podStartSLOduration=9.954420622 podStartE2EDuration="12.480976452s" podCreationTimestamp="2026-01-31 05:02:05 +0000 UTC" firstStartedPulling="2026-01-31 05:02:13.473178229 +0000 UTC m=+1142.421999914" lastFinishedPulling="2026-01-31 05:02:15.999734059 +0000 UTC m=+1144.948555744" observedRunningTime="2026-01-31 05:02:17.475536273 +0000 UTC m=+1146.424357978" watchObservedRunningTime="2026-01-31 05:02:17.480976452 +0000 UTC m=+1146.429798147" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.486217 4832 scope.go:117] "RemoveContainer" containerID="c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.486880 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38\": container with ID starting with c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38 not found: ID does not exist" containerID="c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.486919 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38"} err="failed to get container status \"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38\": rpc error: code = NotFound desc = could not find container \"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38\": container with ID starting with c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38 not found: ID does not exist" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.486946 4832 
scope.go:117] "RemoveContainer" containerID="c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.489714 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f\": container with ID starting with c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f not found: ID does not exist" containerID="c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.489744 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f"} err="failed to get container status \"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f\": rpc error: code = NotFound desc = could not find container \"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f\": container with ID starting with c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f not found: ID does not exist" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.489768 4832 scope.go:117] "RemoveContainer" containerID="c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.490055 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38"} err="failed to get container status \"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38\": rpc error: code = NotFound desc = could not find container \"c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38\": container with ID starting with c1025a611dc759a5eb50d7b935cbe66dcba2cb2ceb7d933fb8f6085f44b61a38 not found: ID does not exist" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 
05:02:17.490082 4832 scope.go:117] "RemoveContainer" containerID="c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.492187 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f"} err="failed to get container status \"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f\": rpc error: code = NotFound desc = could not find container \"c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f\": container with ID starting with c3e2d3a1e97727ec65c8f765496b62db67b79caf3c1aec5157cdd95a017d8a4f not found: ID does not exist" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508056 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24g7r\" (UniqueName: \"kubernetes.io/projected/27f38d5e-d23c-48b9-9f07-855c990b3a78-kube-api-access-24g7r\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508099 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508111 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508122 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27f38d5e-d23c-48b9-9f07-855c990b3a78-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508136 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/27f38d5e-d23c-48b9-9f07-855c990b3a78-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.508161 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.521690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data" (OuterVolumeSpecName: "config-data") pod "27f38d5e-d23c-48b9-9f07-855c990b3a78" (UID: "27f38d5e-d23c-48b9-9f07-855c990b3a78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.611365 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f38d5e-d23c-48b9-9f07-855c990b3a78-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.792917 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.818244 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.841664 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.842424 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842451 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon" Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.842476 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon-log" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842484 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon-log" Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.842496 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api-log" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842502 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api-log" Jan 31 05:02:17 crc kubenswrapper[4832]: E0131 05:02:17.842518 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842525 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842730 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842745 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api-log" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842755 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" containerName="horizon-log" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.842766 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" containerName="cinder-api" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.844284 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.849400 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.849902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.850688 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.882907 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f38d5e-d23c-48b9-9f07-855c990b3a78" path="/var/lib/kubelet/pods/27f38d5e-d23c-48b9-9f07-855c990b3a78/volumes" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.884275 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e522509-0496-4e05-b5d0-935f5ef2fc75" path="/var/lib/kubelet/pods/9e522509-0496-4e05-b5d0-935f5ef2fc75/volumes" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.884983 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc 
kubenswrapper[4832]: I0131 05:02:17.922190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bf54b70-e647-47e2-a8fd-1f15cab614a6-logs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf54b70-e647-47e2-a8fd-1f15cab614a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922252 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22v9n\" (UniqueName: \"kubernetes.io/projected/7bf54b70-e647-47e2-a8fd-1f15cab614a6-kube-api-access-22v9n\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-scripts\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922434 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:17 crc kubenswrapper[4832]: I0131 05:02:17.922463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-scripts\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024355 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " 
pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bf54b70-e647-47e2-a8fd-1f15cab614a6-logs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bf54b70-e647-47e2-a8fd-1f15cab614a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.024720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22v9n\" (UniqueName: \"kubernetes.io/projected/7bf54b70-e647-47e2-a8fd-1f15cab614a6-kube-api-access-22v9n\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.029398 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7bf54b70-e647-47e2-a8fd-1f15cab614a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.029888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bf54b70-e647-47e2-a8fd-1f15cab614a6-logs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.041215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.045387 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.046242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.048116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-scripts\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.053359 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-22v9n\" (UniqueName: \"kubernetes.io/projected/7bf54b70-e647-47e2-a8fd-1f15cab614a6-kube-api-access-22v9n\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.055967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.059772 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf54b70-e647-47e2-a8fd-1f15cab614a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7bf54b70-e647-47e2-a8fd-1f15cab614a6\") " pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.176316 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.239705 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.329733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzjpn\" (UniqueName: \"kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.329780 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.329849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.329884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.329956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.330013 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.330398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle\") pod \"46b0d335-0c75-4996-be63-bd416e988ced\" (UID: \"46b0d335-0c75-4996-be63-bd416e988ced\") " Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.331578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.332112 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.341277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts" (OuterVolumeSpecName: "scripts") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.341378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn" (OuterVolumeSpecName: "kube-api-access-fzjpn") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "kube-api-access-fzjpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.434115 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzjpn\" (UniqueName: \"kubernetes.io/projected/46b0d335-0c75-4996-be63-bd416e988ced-kube-api-access-fzjpn\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.434623 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.434660 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46b0d335-0c75-4996-be63-bd416e988ced-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.434674 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.436770 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.444918 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.458029 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data" (OuterVolumeSpecName: "config-data") pod "46b0d335-0c75-4996-be63-bd416e988ced" (UID: "46b0d335-0c75-4996-be63-bd416e988ced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.477038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerStarted","Data":"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.488061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7c54d8bf-w9s7x" event={"ID":"9192a7c5-49bb-4fed-858e-0c14b96f1288","Type":"ContainerStarted","Data":"216f961b94067276af82051c0a56de3a7ccff690afcab3e461af5168654f0a98"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.488153 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7c54d8bf-w9s7x" event={"ID":"9192a7c5-49bb-4fed-858e-0c14b96f1288","Type":"ContainerStarted","Data":"ced62e07e7ac5e3e2777b259ba2ba2298ac7e931b864eca02a8ee3b3861ff4a2"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.488305 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.496745 4832 generic.go:334] "Generic (PLEG): container finished" podID="46b0d335-0c75-4996-be63-bd416e988ced" containerID="346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764" exitCode=0 Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.496822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerDied","Data":"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.496855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46b0d335-0c75-4996-be63-bd416e988ced","Type":"ContainerDied","Data":"703f8a0ea9d4e5d0b4863b05f9a7f34d9998e6a51903236a4c40dbc3160f3e94"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.496873 4832 scope.go:117] "RemoveContainer" containerID="69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.497007 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.517731 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.537422716 podStartE2EDuration="7.517698611s" podCreationTimestamp="2026-01-31 05:02:11 +0000 UTC" firstStartedPulling="2026-01-31 05:02:13.14844376 +0000 UTC m=+1142.097265445" lastFinishedPulling="2026-01-31 05:02:14.128719655 +0000 UTC m=+1143.077541340" observedRunningTime="2026-01-31 05:02:18.497627197 +0000 UTC m=+1147.446448902" watchObservedRunningTime="2026-01-31 05:02:18.517698611 +0000 UTC m=+1147.466520316" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.521876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" event={"ID":"6df8e9f4-654c-449b-b5ce-2fb826d6449c","Type":"ContainerStarted","Data":"3602051dee2666139de5bafbe1eac5b28917bc5f71c44eb1b4d66a6568beaabe"} Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.557544 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c7c54d8bf-w9s7x" podStartSLOduration=4.556976172 podStartE2EDuration="4.556976172s" podCreationTimestamp="2026-01-31 05:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:18.547497308 +0000 UTC m=+1147.496319013" watchObservedRunningTime="2026-01-31 05:02:18.556976172 +0000 UTC m=+1147.505797857" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.557877 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.570021 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.571688 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46b0d335-0c75-4996-be63-bd416e988ced-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.549215 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.571795 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.571885 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.573839 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.573980 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" 
podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0" gracePeriod=600 Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.577404 4832 scope.go:117] "RemoveContainer" containerID="83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.609011 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c4c47d5bb-m6mqf" podStartSLOduration=11.026654557 podStartE2EDuration="13.6089889s" podCreationTimestamp="2026-01-31 05:02:05 +0000 UTC" firstStartedPulling="2026-01-31 05:02:13.421520263 +0000 UTC m=+1142.370341938" lastFinishedPulling="2026-01-31 05:02:16.003854596 +0000 UTC m=+1144.952676281" observedRunningTime="2026-01-31 05:02:18.594295953 +0000 UTC m=+1147.543117638" watchObservedRunningTime="2026-01-31 05:02:18.6089889 +0000 UTC m=+1147.557810585" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.686873 4832 scope.go:117] "RemoveContainer" containerID="346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.729954 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.741867 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.756984 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.757429 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="sg-core" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757446 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b0d335-0c75-4996-be63-bd416e988ced" 
containerName="sg-core" Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.757459 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="proxy-httpd" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757465 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="proxy-httpd" Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.757490 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="ceilometer-notification-agent" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757498 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="ceilometer-notification-agent" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757735 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="proxy-httpd" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757751 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="ceilometer-notification-agent" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.757775 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b0d335-0c75-4996-be63-bd416e988ced" containerName="sg-core" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.759848 4832 scope.go:117] "RemoveContainer" containerID="69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5" Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.760526 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5\": container with ID starting with 69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5 not found: ID 
does not exist" containerID="69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.760591 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5"} err="failed to get container status \"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5\": rpc error: code = NotFound desc = could not find container \"69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5\": container with ID starting with 69cae75e18439a11f316401ca8e3e3a04cfca9ea718fc95cadd71906b69053e5 not found: ID does not exist" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.760623 4832 scope.go:117] "RemoveContainer" containerID="83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553" Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.760990 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553\": container with ID starting with 83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553 not found: ID does not exist" containerID="83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.761055 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553"} err="failed to get container status \"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553\": rpc error: code = NotFound desc = could not find container \"83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553\": container with ID starting with 83255fcc723cb51bd88d4f3864c113117b61861f708cfc8fb92e75bd63476553 not found: ID does not exist" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.761106 4832 
scope.go:117] "RemoveContainer" containerID="346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.761710 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: E0131 05:02:18.761893 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764\": container with ID starting with 346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764 not found: ID does not exist" containerID="346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.761922 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764"} err="failed to get container status \"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764\": rpc error: code = NotFound desc = could not find container \"346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764\": container with ID starting with 346c6915e52f127ba3d2807fea4a516016f6e64ba1c3cdded6c8370c6acfc764 not found: ID does not exist" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.766423 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.767921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.792295 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.809061 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 05:02:18 crc 
kubenswrapper[4832]: I0131 05:02:18.894381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wjl\" (UniqueName: \"kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.894728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.996500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wjl\" (UniqueName: \"kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " 
pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.997430 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:18 crc kubenswrapper[4832]: I0131 05:02:18.998150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:18.998710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.015292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.016266 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wjl\" (UniqueName: \"kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.016408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.017357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.024379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.080941 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.111207 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.209814 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.541226 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bf54b70-e647-47e2-a8fd-1f15cab614a6","Type":"ContainerStarted","Data":"98b15c30a1fd05022ae446caef96e12cd9ba726316210133c417b647ebc02905"} Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.545825 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0" exitCode=0 Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.545878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0"} Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.545962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e"} Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.545987 4832 scope.go:117] "RemoveContainer" containerID="54222fe11bae7b5928dfc35b129ce940cf361d675f70d08f1d3420ed1cc0952b" Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.743744 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:19 crc kubenswrapper[4832]: I0131 05:02:19.877521 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b0d335-0c75-4996-be63-bd416e988ced" 
path="/var/lib/kubelet/pods/46b0d335-0c75-4996-be63-bd416e988ced/volumes" Jan 31 05:02:20 crc kubenswrapper[4832]: I0131 05:02:20.566650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerStarted","Data":"1569399ec5f81f4d766407a4d8af1993797db27f287e47e72bb35931be77138e"} Jan 31 05:02:20 crc kubenswrapper[4832]: I0131 05:02:20.570578 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bf54b70-e647-47e2-a8fd-1f15cab614a6","Type":"ContainerStarted","Data":"179837b3ff92f032a463553bae812b0e79d902176a39da5c6a8ab118b001c29e"} Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.241102 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7459856588-428fk" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.500848 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.634799 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.652912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bf54b70-e647-47e2-a8fd-1f15cab614a6","Type":"ContainerStarted","Data":"e2d61691653781ae168d0a4114ce4dc0b6fba38637ee3b5df334b10f57273e74"} Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.654445 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.681877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerStarted","Data":"36c56f17cad3e341c219f1f4020fb202ca3a55baafc71069fba93db9cdc4ddd0"} Jan 31 05:02:21 crc 
kubenswrapper[4832]: I0131 05:02:21.714961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.714937257 podStartE2EDuration="4.714937257s" podCreationTimestamp="2026-01-31 05:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:21.698251258 +0000 UTC m=+1150.647072963" watchObservedRunningTime="2026-01-31 05:02:21.714937257 +0000 UTC m=+1150.663758942" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.732927 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.733504 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="dnsmasq-dns" containerID="cri-o://d9cc72cbd5d2fa881db72d46e3877e674d1448590716a74b614ab6473fe4f70e" gracePeriod=10 Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.745716 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f6b9f547b-mrjcq" Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.847848 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.848308 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon-log" containerID="cri-o://4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690" gracePeriod=30 Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.848752 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" 
containerID="cri-o://0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28" gracePeriod=30 Jan 31 05:02:21 crc kubenswrapper[4832]: I0131 05:02:21.863942 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.136367 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7459856588-428fk" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.484273 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-548576cf8d-gz7f7"] Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.488350 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.507089 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548576cf8d-gz7f7"] Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-combined-ca-bundle\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629527 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-scripts\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629587 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-config-data\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-internal-tls-certs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-logs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz5c\" (UniqueName: \"kubernetes.io/projected/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-kube-api-access-whz5c\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.629715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-public-tls-certs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 
05:02:22.639086 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.653094 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-combined-ca-bundle\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-scripts\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-config-data\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-internal-tls-certs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-logs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz5c\" (UniqueName: \"kubernetes.io/projected/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-kube-api-access-whz5c\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.731642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-public-tls-certs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.735461 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-logs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.752275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-config-data\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.752767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-internal-tls-certs\") pod \"placement-548576cf8d-gz7f7\" 
(UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.753158 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-public-tls-certs\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.753686 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-combined-ca-bundle\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.754104 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerID="d9cc72cbd5d2fa881db72d46e3877e674d1448590716a74b614ab6473fe4f70e" exitCode=0 Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.754234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" event={"ID":"cd0f7236-580a-416e-b24b-109b3eff6ac0","Type":"ContainerDied","Data":"d9cc72cbd5d2fa881db72d46e3877e674d1448590716a74b614ab6473fe4f70e"} Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.761495 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.766022 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.780658 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz5c\" (UniqueName: \"kubernetes.io/projected/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-kube-api-access-whz5c\") pod \"placement-548576cf8d-gz7f7\" (UID: \"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.813186 4832 generic.go:334] "Generic (PLEG): container finished" podID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerID="334190742e1382c51d3e3a093dc83cc88edc3155581a81e4ffe60739732f9b6a" exitCode=0 Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.814077 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="cinder-scheduler" containerID="cri-o://f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1" gracePeriod=30 Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.814287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerDied","Data":"334190742e1382c51d3e3a093dc83cc88edc3155581a81e4ffe60739732f9b6a"} Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.816654 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="probe" containerID="cri-o://ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860" gracePeriod=30 Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.842594 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfb9b89-7b02-4f5a-b967-d84ad8e20325-scripts\") pod \"placement-548576cf8d-gz7f7\" (UID: 
\"0bfb9b89-7b02-4f5a-b967-d84ad8e20325\") " pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.942216 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.942698 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.942857 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.942899 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvp5b\" (UniqueName: \"kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.942939 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:22 crc kubenswrapper[4832]: I0131 05:02:22.943009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc\") pod \"cd0f7236-580a-416e-b24b-109b3eff6ac0\" (UID: \"cd0f7236-580a-416e-b24b-109b3eff6ac0\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:22.996397 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b" (OuterVolumeSpecName: "kube-api-access-qvp5b") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "kube-api-access-qvp5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.047309 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvp5b\" (UniqueName: \"kubernetes.io/projected/cd0f7236-580a-416e-b24b-109b3eff6ac0-kube-api-access-qvp5b\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.060366 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config" (OuterVolumeSpecName: "config") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.128867 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.153258 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.156290 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.156330 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.168139 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.185714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.214221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd0f7236-580a-416e-b24b-109b3eff6ac0" (UID: "cd0f7236-580a-416e-b24b-109b3eff6ac0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.260663 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.261196 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.261206 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd0f7236-580a-416e-b24b-109b3eff6ac0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.489428 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.567788 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568346 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568411 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568434 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568519 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwcr\" (UniqueName: \"kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.568604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config\") pod \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\" (UID: \"c7ba646e-a7ee-4e60-8396-5a9eb0590378\") " Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.580819 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr" (OuterVolumeSpecName: "kube-api-access-kfwcr") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "kube-api-access-kfwcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.587816 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.677138 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwcr\" (UniqueName: \"kubernetes.io/projected/c7ba646e-a7ee-4e60-8396-5a9eb0590378-kube-api-access-kfwcr\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.677204 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.742082 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config" (OuterVolumeSpecName: "config") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.762856 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.779607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.780118 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.780162 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.780176 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.790305 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.830425 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c7ba646e-a7ee-4e60-8396-5a9eb0590378" (UID: "c7ba646e-a7ee-4e60-8396-5a9eb0590378"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.852188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerStarted","Data":"2599f104f4ab1d160c4382aa5f79a8263dc6e9baff66f5ae5d89fd9c6eda8140"} Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.882114 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.882164 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ba646e-a7ee-4e60-8396-5a9eb0590378-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.885975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84fb4958f7-pczxt" event={"ID":"c7ba646e-a7ee-4e60-8396-5a9eb0590378","Type":"ContainerDied","Data":"1d3157bc2913527613e8355614024898bcf0037fc69d7f70e49bb02cc0ac67e7"} Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.886041 4832 scope.go:117] "RemoveContainer" containerID="12ac270587f81dc1ae2cb82016d79dc5ae88f9b329d9cb2f837f4b4f3f886a56" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.886036 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84fb4958f7-pczxt" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.911689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" event={"ID":"cd0f7236-580a-416e-b24b-109b3eff6ac0","Type":"ContainerDied","Data":"25e5af0ed5711a554933daba0d7a4cd8526fa3cadad1604b97b07f7bc363f91f"} Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.911870 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j28pw" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.925092 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-548576cf8d-gz7f7"] Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.951090 4832 scope.go:117] "RemoveContainer" containerID="334190742e1382c51d3e3a093dc83cc88edc3155581a81e4ffe60739732f9b6a" Jan 31 05:02:23 crc kubenswrapper[4832]: I0131 05:02:23.971652 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.000777 4832 scope.go:117] "RemoveContainer" containerID="d9cc72cbd5d2fa881db72d46e3877e674d1448590716a74b614ab6473fe4f70e" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.000922 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84fb4958f7-pczxt"] Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.007535 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.015147 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j28pw"] Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.037840 4832 scope.go:117] "RemoveContainer" containerID="d643038de9befe52f280166c184b89011f274b4595f403c9e73826c0937aeb1b" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.458224 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.837910 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d775c5b5d-5df9r" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.943720 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548576cf8d-gz7f7" event={"ID":"0bfb9b89-7b02-4f5a-b967-d84ad8e20325","Type":"ContainerStarted","Data":"55a201805512cda18301324da8a23fac1a2a6ed64a7798a764d0ef0dfa943d97"} Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.943766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548576cf8d-gz7f7" event={"ID":"0bfb9b89-7b02-4f5a-b967-d84ad8e20325","Type":"ContainerStarted","Data":"64e105a95df256330ac6b733e0a9a3ecbf0f7aa8e7357ef1adb8ad02f9c34416"} Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.943777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-548576cf8d-gz7f7" event={"ID":"0bfb9b89-7b02-4f5a-b967-d84ad8e20325","Type":"ContainerStarted","Data":"b70d04754355481ad3f84d06f902212b915d3c7e8a611ce4c7d4e84b699124dd"} Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.944242 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.944323 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.986318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerStarted","Data":"42f45a79a158dfa648f602e7c7f474d6bf6ca6d88c97225468cd07d29e85ec1c"} Jan 31 05:02:24 crc kubenswrapper[4832]: I0131 05:02:24.990077 4832 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-548576cf8d-gz7f7" podStartSLOduration=2.990060076 podStartE2EDuration="2.990060076s" podCreationTimestamp="2026-01-31 05:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:24.98667483 +0000 UTC m=+1153.935496525" watchObservedRunningTime="2026-01-31 05:02:24.990060076 +0000 UTC m=+1153.938881761" Jan 31 05:02:25 crc kubenswrapper[4832]: I0131 05:02:25.268932 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-797bd69d58-5ff8g" Jan 31 05:02:25 crc kubenswrapper[4832]: I0131 05:02:25.388155 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"] Jan 31 05:02:25 crc kubenswrapper[4832]: I0131 05:02:25.871267 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" path="/var/lib/kubelet/pods/c7ba646e-a7ee-4e60-8396-5a9eb0590378/volumes" Jan 31 05:02:25 crc kubenswrapper[4832]: I0131 05:02:25.871961 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" path="/var/lib/kubelet/pods/cd0f7236-580a-416e-b24b-109b3eff6ac0/volumes" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.009316 4832 generic.go:334] "Generic (PLEG): container finished" podID="ee665057-9128-472d-ad63-7701cfa381e8" containerID="ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860" exitCode=0 Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.009361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerDied","Data":"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860"} Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.009666 4832 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log" containerID="cri-o://53b347cfd4d74f353c7aa1ec27042440313f790bec4c35668d351fd6187862ed" gracePeriod=30 Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.009859 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api" containerID="cri-o://a3f0098f01cd9efd61d11f50e7f0d883ca02df7cb3f7e9eff6bc7a6171800fd4" gracePeriod=30 Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.014750 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.014917 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.017895 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.018259 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.476452 4832 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:57338->10.217.0.150:8443: read: connection reset by peer" Jan 31 05:02:26 crc kubenswrapper[4832]: I0131 05:02:26.477132 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.034339 4832 generic.go:334] "Generic (PLEG): container finished" podID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerID="0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28" exitCode=0 Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.034406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerDied","Data":"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28"} Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.036839 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerID="53b347cfd4d74f353c7aa1ec27042440313f790bec4c35668d351fd6187862ed" exitCode=143 Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.036893 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerDied","Data":"53b347cfd4d74f353c7aa1ec27042440313f790bec4c35668d351fd6187862ed"} Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.043818 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerStarted","Data":"34b5c03e8b68de0991b7934d7d9049a34bd94b3fb751635efd383d98664264e4"} Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.045544 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.071874 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.891013315 podStartE2EDuration="9.071854674s" podCreationTimestamp="2026-01-31 05:02:18 +0000 UTC" firstStartedPulling="2026-01-31 05:02:19.761407427 +0000 UTC m=+1148.710229102" lastFinishedPulling="2026-01-31 05:02:25.942248786 +0000 UTC m=+1154.891070461" observedRunningTime="2026-01-31 05:02:27.068537611 +0000 UTC m=+1156.017359296" watchObservedRunningTime="2026-01-31 05:02:27.071854674 +0000 UTC m=+1156.020676359" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.806928 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888543 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888665 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888711 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888789 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzn6q\" (UniqueName: \"kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.888808 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts\") pod \"ee665057-9128-472d-ad63-7701cfa381e8\" (UID: \"ee665057-9128-472d-ad63-7701cfa381e8\") " Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.890083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.899056 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.900795 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts" (OuterVolumeSpecName: "scripts") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.953830 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q" (OuterVolumeSpecName: "kube-api-access-fzn6q") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "kube-api-access-fzn6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.993037 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzn6q\" (UniqueName: \"kubernetes.io/projected/ee665057-9128-472d-ad63-7701cfa381e8-kube-api-access-fzn6q\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.993310 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.993319 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:27 crc kubenswrapper[4832]: I0131 05:02:27.993328 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee665057-9128-472d-ad63-7701cfa381e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.023415 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.059897 4832 generic.go:334] "Generic (PLEG): container finished" podID="ee665057-9128-472d-ad63-7701cfa381e8" containerID="f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1" exitCode=0 Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.061433 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.061440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerDied","Data":"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1"} Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.061748 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data" (OuterVolumeSpecName: "config-data") pod "ee665057-9128-472d-ad63-7701cfa381e8" (UID: "ee665057-9128-472d-ad63-7701cfa381e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.061762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ee665057-9128-472d-ad63-7701cfa381e8","Type":"ContainerDied","Data":"7ee237afb67c4ac84f2994196ae2d20a859a733f39e2fcf79f1b2c7cdadc2e71"} Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.061792 4832 scope.go:117] "RemoveContainer" containerID="ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.099526 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.099576 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee665057-9128-472d-ad63-7701cfa381e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.099709 4832 scope.go:117] "RemoveContainer" containerID="f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1" Jan 31 05:02:28 
crc kubenswrapper[4832]: I0131 05:02:28.099838 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.106521 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.139882 4832 scope.go:117] "RemoveContainer" containerID="ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.140702 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860\": container with ID starting with ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860 not found: ID does not exist" containerID="ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.140758 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860"} err="failed to get container status \"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860\": rpc error: code = NotFound desc = could not find container \"ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860\": container with ID starting with ffa897fdfd7b3c93c46fa5b403d3baba9d13235cf524211b06261da73d831860 not found: ID does not exist" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.140790 4832 scope.go:117] "RemoveContainer" containerID="f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.142869 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1\": container with ID starting with 
f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1 not found: ID does not exist" containerID="f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.142957 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1"} err="failed to get container status \"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1\": rpc error: code = NotFound desc = could not find container \"f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1\": container with ID starting with f8c8c4b681c47f1cda4fa155c98d5c7959ad258128a5e5c677b76ee1910146f1 not found: ID does not exist" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.149765 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.151979 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152003 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.152040 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="init" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152048 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="init" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.152061 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-api" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152072 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-api" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.152108 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="probe" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152115 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="probe" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.152133 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="dnsmasq-dns" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152140 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="dnsmasq-dns" Jan 31 05:02:28 crc kubenswrapper[4832]: E0131 05:02:28.152161 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="cinder-scheduler" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.152169 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="cinder-scheduler" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.153510 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-httpd" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.153535 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="cinder-scheduler" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.153552 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0f7236-580a-416e-b24b-109b3eff6ac0" containerName="dnsmasq-dns" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.153580 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ee665057-9128-472d-ad63-7701cfa381e8" containerName="probe" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.153600 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ba646e-a7ee-4e60-8396-5a9eb0590378" containerName="neutron-api" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.155451 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.164399 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.182295 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.202704 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.202771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-scripts\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.202801 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4b153e2-4087-4707-a751-3b518f670193-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.202868 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmsj5\" (UniqueName: \"kubernetes.io/projected/d4b153e2-4087-4707-a751-3b518f670193-kube-api-access-qmsj5\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.202928 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.203007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306095 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmsj5\" (UniqueName: \"kubernetes.io/projected/d4b153e2-4087-4707-a751-3b518f670193-kube-api-access-qmsj5\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306396 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-scripts\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4b153e2-4087-4707-a751-3b518f670193-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.306598 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4b153e2-4087-4707-a751-3b518f670193-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0" Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.313591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0"
Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.315389 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0"
Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.315924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-config-data\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0"
Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.317198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b153e2-4087-4707-a751-3b518f670193-scripts\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0"
Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.327532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmsj5\" (UniqueName: \"kubernetes.io/projected/d4b153e2-4087-4707-a751-3b518f670193-kube-api-access-qmsj5\") pod \"cinder-scheduler-0\" (UID: \"d4b153e2-4087-4707-a751-3b518f670193\") " pod="openstack/cinder-scheduler-0"
Jan 31 05:02:28 crc kubenswrapper[4832]: I0131 05:02:28.535809 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 05:02:29 crc kubenswrapper[4832]: I0131 05:02:29.040055 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 05:02:29 crc kubenswrapper[4832]: I0131 05:02:29.071614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d4b153e2-4087-4707-a751-3b518f670193","Type":"ContainerStarted","Data":"44d5f645040d8b27acdb9e514453e09b512b7894180cc5be9452386b5f3b695f"}
Jan 31 05:02:29 crc kubenswrapper[4832]: I0131 05:02:29.872810 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee665057-9128-472d-ad63-7701cfa381e8" path="/var/lib/kubelet/pods/ee665057-9128-472d-ad63-7701cfa381e8/volumes"
Jan 31 05:02:30 crc kubenswrapper[4832]: I0131 05:02:30.099533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d4b153e2-4087-4707-a751-3b518f670193","Type":"ContainerStarted","Data":"7b9d03f6decc0e5bd7d7905901655d57eac3973015944a2a840d8008b03b3a47"}
Jan 31 05:02:30 crc kubenswrapper[4832]: I0131 05:02:30.371253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 31 05:02:31 crc kubenswrapper[4832]: I0131 05:02:31.056970 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 05:02:31 crc kubenswrapper[4832]: I0131 05:02:31.111841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d4b153e2-4087-4707-a751-3b518f670193","Type":"ContainerStarted","Data":"cee91f04f2264afd23bad0efcb73bf98dff140687e5c4cbc982635f7c8bf27b2"}
Jan 31 05:02:31 crc kubenswrapper[4832]: I0131 05:02:31.846104 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79bb65dc58-kdbq7"
Jan 31 05:02:31 crc kubenswrapper[4832]: I0131 05:02:31.890625 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.890599915 podStartE2EDuration="3.890599915s" podCreationTimestamp="2026-01-31 05:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:31.139472767 +0000 UTC m=+1160.088294512" watchObservedRunningTime="2026-01-31 05:02:31.890599915 +0000 UTC m=+1160.839421600"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.079809 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.082230 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.089118 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.089376 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.089530 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-kdkx4"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.094859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.185405 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz9hq\" (UniqueName: \"kubernetes.io/projected/1847eb5f-c952-4d08-8579-786994ad5c56-kube-api-access-bz9hq\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.185514 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.185552 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.185675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config-secret\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.287044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz9hq\" (UniqueName: \"kubernetes.io/projected/1847eb5f-c952-4d08-8579-786994ad5c56-kube-api-access-bz9hq\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.287172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.287212 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.287307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config-secret\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.290051 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.295093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-openstack-config-secret\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.296339 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1847eb5f-c952-4d08-8579-786994ad5c56-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.315681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz9hq\" (UniqueName: \"kubernetes.io/projected/1847eb5f-c952-4d08-8579-786994ad5c56-kube-api-access-bz9hq\") pod \"openstackclient\" (UID: \"1847eb5f-c952-4d08-8579-786994ad5c56\") " pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.407846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.456878 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:46230->10.217.0.163:9311: read: connection reset by peer"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.457234 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d775c5b5d-5df9r" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:46240->10.217.0.163:9311: read: connection reset by peer"
Jan 31 05:02:32 crc kubenswrapper[4832]: I0131 05:02:32.457310 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d775c5b5d-5df9r"
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.031289 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.132511 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerID="a3f0098f01cd9efd61d11f50e7f0d883ca02df7cb3f7e9eff6bc7a6171800fd4" exitCode=0
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.132602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerDied","Data":"a3f0098f01cd9efd61d11f50e7f0d883ca02df7cb3f7e9eff6bc7a6171800fd4"}
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.132634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d775c5b5d-5df9r" event={"ID":"6f0e1524-0d91-4078-93f4-9c80f630d106","Type":"ContainerDied","Data":"6ddc2c6c7d133540e03d8ee3a26b1b7b16544ccaf97f327498172dab16ef6f00"}
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.132645 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ddc2c6c7d133540e03d8ee3a26b1b7b16544ccaf97f327498172dab16ef6f00"
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.132779 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d775c5b5d-5df9r"
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.152987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1847eb5f-c952-4d08-8579-786994ad5c56","Type":"ContainerStarted","Data":"075fa778a035f59aeb86dd9ed8894089d8bc27101c11e125b826bd8be33d44bd"}
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.317736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data\") pod \"6f0e1524-0d91-4078-93f4-9c80f630d106\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") "
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.317816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle\") pod \"6f0e1524-0d91-4078-93f4-9c80f630d106\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") "
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.318077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmvjs\" (UniqueName: \"kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs\") pod \"6f0e1524-0d91-4078-93f4-9c80f630d106\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") "
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.318171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom\") pod \"6f0e1524-0d91-4078-93f4-9c80f630d106\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") "
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.318201 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs\") pod \"6f0e1524-0d91-4078-93f4-9c80f630d106\" (UID: \"6f0e1524-0d91-4078-93f4-9c80f630d106\") "
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.319430 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs" (OuterVolumeSpecName: "logs") pod "6f0e1524-0d91-4078-93f4-9c80f630d106" (UID: "6f0e1524-0d91-4078-93f4-9c80f630d106"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.325834 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs" (OuterVolumeSpecName: "kube-api-access-dmvjs") pod "6f0e1524-0d91-4078-93f4-9c80f630d106" (UID: "6f0e1524-0d91-4078-93f4-9c80f630d106"). InnerVolumeSpecName "kube-api-access-dmvjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.330764 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f0e1524-0d91-4078-93f4-9c80f630d106" (UID: "6f0e1524-0d91-4078-93f4-9c80f630d106"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.372410 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0e1524-0d91-4078-93f4-9c80f630d106" (UID: "6f0e1524-0d91-4078-93f4-9c80f630d106"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.390938 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data" (OuterVolumeSpecName: "config-data") pod "6f0e1524-0d91-4078-93f4-9c80f630d106" (UID: "6f0e1524-0d91-4078-93f4-9c80f630d106"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.422500 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.422542 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.422557 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmvjs\" (UniqueName: \"kubernetes.io/projected/6f0e1524-0d91-4078-93f4-9c80f630d106-kube-api-access-dmvjs\") on node \"crc\" DevicePath \"\""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.422579 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f0e1524-0d91-4078-93f4-9c80f630d106-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.422589 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0e1524-0d91-4078-93f4-9c80f630d106-logs\") on node \"crc\" DevicePath \"\""
Jan 31 05:02:33 crc kubenswrapper[4832]: I0131 05:02:33.536633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 31 05:02:34 crc kubenswrapper[4832]: I0131 05:02:34.164467 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d775c5b5d-5df9r"
Jan 31 05:02:34 crc kubenswrapper[4832]: I0131 05:02:34.194931 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"]
Jan 31 05:02:34 crc kubenswrapper[4832]: I0131 05:02:34.201868 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d775c5b5d-5df9r"]
Jan 31 05:02:35 crc kubenswrapper[4832]: I0131 05:02:35.871026 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" path="/var/lib/kubelet/pods/6f0e1524-0d91-4078-93f4-9c80f630d106/volumes"
Jan 31 05:02:36 crc kubenswrapper[4832]: I0131 05:02:36.304090 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.342768 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.343575 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-central-agent" containerID="cri-o://36c56f17cad3e341c219f1f4020fb202ca3a55baafc71069fba93db9cdc4ddd0" gracePeriod=30
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.343998 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-notification-agent" containerID="cri-o://2599f104f4ab1d160c4382aa5f79a8263dc6e9baff66f5ae5d89fd9c6eda8140" gracePeriod=30
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.344056 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="sg-core" containerID="cri-o://42f45a79a158dfa648f602e7c7f474d6bf6ca6d88c97225468cd07d29e85ec1c" gracePeriod=30
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.344084 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="proxy-httpd" containerID="cri-o://34b5c03e8b68de0991b7934d7d9049a34bd94b3fb751635efd383d98664264e4" gracePeriod=30
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.358377 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.170:3000/\": EOF"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.441790 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6df44bf7d7-6dwfp"]
Jan 31 05:02:37 crc kubenswrapper[4832]: E0131 05:02:37.442246 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.442262 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api"
Jan 31 05:02:37 crc kubenswrapper[4832]: E0131 05:02:37.442295 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.442303 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.442509 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.442524 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0e1524-0d91-4078-93f4-9c80f630d106" containerName="barbican-api-log"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.452686 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.455597 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.455639 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.466117 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6df44bf7d7-6dwfp"]
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.472290 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.618842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-etc-swift\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619346 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-public-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-internal-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-run-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619658 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-config-data\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkrg\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-kube-api-access-qjkrg\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-log-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.619858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-combined-ca-bundle\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-run-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-config-data\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkrg\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-kube-api-access-qjkrg\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-log-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-combined-ca-bundle\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-etc-swift\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-public-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.721784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-internal-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.722125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-run-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.722384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8780918-f34b-41e5-9ccc-d12823931da5-log-httpd\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.730413 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-internal-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.730765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-public-tls-certs\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.731705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-config-data\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.733006 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8780918-f34b-41e5-9ccc-d12823931da5-combined-ca-bundle\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.740240 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-etc-swift\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.746742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkrg\" (UniqueName: \"kubernetes.io/projected/a8780918-f34b-41e5-9ccc-d12823931da5-kube-api-access-qjkrg\") pod \"swift-proxy-6df44bf7d7-6dwfp\" (UID: \"a8780918-f34b-41e5-9ccc-d12823931da5\") " pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:37 crc kubenswrapper[4832]: I0131 05:02:37.841105 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6df44bf7d7-6dwfp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.215322 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerID="34b5c03e8b68de0991b7934d7d9049a34bd94b3fb751635efd383d98664264e4" exitCode=0
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.216158 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerID="42f45a79a158dfa648f602e7c7f474d6bf6ca6d88c97225468cd07d29e85ec1c" exitCode=2
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.216172 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerID="36c56f17cad3e341c219f1f4020fb202ca3a55baafc71069fba93db9cdc4ddd0" exitCode=0
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.215422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerDied","Data":"34b5c03e8b68de0991b7934d7d9049a34bd94b3fb751635efd383d98664264e4"}
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.216228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerDied","Data":"42f45a79a158dfa648f602e7c7f474d6bf6ca6d88c97225468cd07d29e85ec1c"}
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.216250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerDied","Data":"36c56f17cad3e341c219f1f4020fb202ca3a55baafc71069fba93db9cdc4ddd0"}
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.382264 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-k4pzp"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.388456 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.415644 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k4pzp"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.497013 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5x7l7"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.498760 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5x7l7"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.522969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5x7l7"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.543097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplzm\" (UniqueName: \"kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.543250 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.551100 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6df44bf7d7-6dwfp"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.579932 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jwzlw"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.581435 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwzlw"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.599076 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3153-account-create-update-xpnkf"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.600824 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3153-account-create-update-xpnkf"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.604166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.618222 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jwzlw"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.635255 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3153-account-create-update-xpnkf"]
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.645367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplzm\" (UniqueName: \"kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.645453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qtn\" (UniqueName: \"kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.645481 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.645525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.646587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.688425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplzm\" (UniqueName: \"kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm\") pod \"nova-api-db-create-k4pzp\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " pod="openstack/nova-api-db-create-k4pzp"
Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.719292 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k4pzp" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.796149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tml9v\" (UniqueName: \"kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v\") pod \"nova-cell1-db-create-jwzlw\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.796574 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmts\" (UniqueName: \"kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.796807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.796984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qtn\" (UniqueName: \"kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.797120 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts\") pod \"nova-cell1-db-create-jwzlw\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.797322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.798580 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.816583 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cce3-account-create-update-5j49p"] Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.821037 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.823835 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.841762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cce3-account-create-update-5j49p"] Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.850240 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qtn\" (UniqueName: \"kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn\") pod \"nova-cell0-db-create-5x7l7\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgncz\" (UniqueName: \"kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900486 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts\") pod \"nova-cell1-db-create-jwzlw\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900628 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tml9v\" (UniqueName: \"kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v\") pod \"nova-cell1-db-create-jwzlw\" (UID: 
\"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900687 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmts\" (UniqueName: \"kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.900731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.901520 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.902066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts\") pod \"nova-cell1-db-create-jwzlw\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.924372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmts\" (UniqueName: \"kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts\") pod \"nova-api-3153-account-create-update-xpnkf\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.924551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tml9v\" (UniqueName: \"kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v\") pod \"nova-cell1-db-create-jwzlw\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.936153 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:38 crc kubenswrapper[4832]: I0131 05:02:38.952935 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.003198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.003341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgncz\" (UniqueName: \"kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.005148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.006503 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7db2-account-create-update-gjdrd"] Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.008127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.013575 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.026844 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7db2-account-create-update-gjdrd"] Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.044498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgncz\" (UniqueName: \"kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz\") pod \"nova-cell0-cce3-account-create-update-5j49p\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.105457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.105932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmq4\" (UniqueName: \"kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.141706 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.207435 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.207592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqmq4\" (UniqueName: \"kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.208650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.217144 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.231693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqmq4\" (UniqueName: \"kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4\") pod \"nova-cell1-7db2-account-create-update-gjdrd\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.338114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:39 crc kubenswrapper[4832]: I0131 05:02:39.355041 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:41 crc kubenswrapper[4832]: I0131 05:02:41.264612 4832 generic.go:334] "Generic (PLEG): container finished" podID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerID="2599f104f4ab1d160c4382aa5f79a8263dc6e9baff66f5ae5d89fd9c6eda8140" exitCode=0 Jan 31 05:02:41 crc kubenswrapper[4832]: I0131 05:02:41.264690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerDied","Data":"2599f104f4ab1d160c4382aa5f79a8263dc6e9baff66f5ae5d89fd9c6eda8140"} Jan 31 05:02:45 crc kubenswrapper[4832]: W0131 05:02:45.002077 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8780918_f34b_41e5_9ccc_d12823931da5.slice/crio-bd77eeb6675d9d49c41a82510d251305d5fa4abce7091064c47260d0defd61da WatchSource:0}: Error finding container bd77eeb6675d9d49c41a82510d251305d5fa4abce7091064c47260d0defd61da: Status 404 returned error can't find the container with id bd77eeb6675d9d49c41a82510d251305d5fa4abce7091064c47260d0defd61da Jan 31 05:02:45 
crc kubenswrapper[4832]: I0131 05:02:45.055162 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c7c54d8bf-w9s7x" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.134598 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.134989 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c45f655bb-2jmz9" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-api" containerID="cri-o://0adb770fffda48e8d6e6a7859b642de3672c588b793b6c40c38872ab6c2059b7" gracePeriod=30 Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.135580 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c45f655bb-2jmz9" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-httpd" containerID="cri-o://f2b58cf48366cb8fa2e610a60a2e1dd027dc03184f8b9d6d9337d9b04dd2e448" gracePeriod=30 Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.332523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" event={"ID":"a8780918-f34b-41e5-9ccc-d12823931da5","Type":"ContainerStarted","Data":"bd77eeb6675d9d49c41a82510d251305d5fa4abce7091064c47260d0defd61da"} Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.368846 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerID="f2b58cf48366cb8fa2e610a60a2e1dd027dc03184f8b9d6d9337d9b04dd2e448" exitCode=0 Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.368928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerDied","Data":"f2b58cf48366cb8fa2e610a60a2e1dd027dc03184f8b9d6d9337d9b04dd2e448"} Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.435929 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.458696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.458763 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.458792 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.460070 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.503794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.560151 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.560655 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9wjl\" (UniqueName: \"kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.560739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.560936 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.561086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data\") pod \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\" (UID: \"ef4aa6cc-0613-4424-bfe8-4ff749661c27\") " Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.563793 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.563821 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef4aa6cc-0613-4424-bfe8-4ff749661c27-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.563832 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.588351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts" (OuterVolumeSpecName: "scripts") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.593589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl" (OuterVolumeSpecName: "kube-api-access-t9wjl") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). 
InnerVolumeSpecName "kube-api-access-t9wjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.640695 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.669719 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.675645 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.675679 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9wjl\" (UniqueName: \"kubernetes.io/projected/ef4aa6cc-0613-4424-bfe8-4ff749661c27-kube-api-access-t9wjl\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.772051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data" (OuterVolumeSpecName: "config-data") pod "ef4aa6cc-0613-4424-bfe8-4ff749661c27" (UID: "ef4aa6cc-0613-4424-bfe8-4ff749661c27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.780254 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef4aa6cc-0613-4424-bfe8-4ff749661c27-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.965369 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-k4pzp"] Jan 31 05:02:45 crc kubenswrapper[4832]: W0131 05:02:45.978228 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4901d831_7d96_44f6_afde_ab64894754b5.slice/crio-9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05 WatchSource:0}: Error finding container 9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05: Status 404 returned error can't find the container with id 9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05 Jan 31 05:02:45 crc kubenswrapper[4832]: I0131 05:02:45.982022 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5x7l7"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.222943 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cce3-account-create-update-5j49p"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.261041 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7db2-account-create-update-gjdrd"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.273084 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jwzlw"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.304101 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fd59dbb48-vjkkx" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.370221 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3153-account-create-update-xpnkf"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.410787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k4pzp" event={"ID":"4901d831-7d96-44f6-afde-ab64894754b5","Type":"ContainerStarted","Data":"9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.415310 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" event={"ID":"a8780918-f34b-41e5-9ccc-d12823931da5","Type":"ContainerStarted","Data":"7d0931b86acbfb139413a407ae9f18b42ee6c9ea72eaed273208275eae782aab"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.415361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" event={"ID":"a8780918-f34b-41e5-9ccc-d12823931da5","Type":"ContainerStarted","Data":"0403c0255fe8fe4e022b4ef465052deb8bb1a26ba4115e00bc0c18a9854df778"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.415662 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.415706 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.422332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5x7l7" event={"ID":"a6913a01-865b-47e6-bf86-fe0ecfc7ea42","Type":"ContainerStarted","Data":"29ccc0d58767e1e966ca925487fa4d242330df5d66bcb37c316e9e51a7a02a64"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.425144 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1847eb5f-c952-4d08-8579-786994ad5c56","Type":"ContainerStarted","Data":"cb4c74c7d44dc22c8ec6843c9bcd5316c2a1f3fcc5188988ba3b3c6ff5566aee"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.427554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef4aa6cc-0613-4424-bfe8-4ff749661c27","Type":"ContainerDied","Data":"1569399ec5f81f4d766407a4d8af1993797db27f287e47e72bb35931be77138e"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.427729 4832 scope.go:117] "RemoveContainer" containerID="34b5c03e8b68de0991b7934d7d9049a34bd94b3fb751635efd383d98664264e4" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.427873 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.431133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" event={"ID":"a4e367b5-55d2-4e64-9fda-e9b79fe2684f","Type":"ContainerStarted","Data":"1a36180cc63cd501e77a91a46500b04e8bd2142e016dfed9f689419045792497"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.440762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwzlw" event={"ID":"96a7b311-e7f0-4471-937c-bc6330f4c1c5","Type":"ContainerStarted","Data":"ebe0023e4610ae90149b3c3522f9d95a5af943fc2a1fd998a11abc2019c86172"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.444073 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" podStartSLOduration=9.44404616 podStartE2EDuration="9.44404616s" podCreationTimestamp="2026-01-31 05:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:46.439136638 +0000 UTC m=+1175.387958343" 
watchObservedRunningTime="2026-01-31 05:02:46.44404616 +0000 UTC m=+1175.392867865" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.458601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" event={"ID":"d15a8ae3-e07a-4f6d-9364-a847ec46f620","Type":"ContainerStarted","Data":"a57b5678f1af132e47f8259f593aa25200897602d312f95a4722ae8774f8c9d3"} Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.468779 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.196058229 podStartE2EDuration="14.468749118s" podCreationTimestamp="2026-01-31 05:02:32 +0000 UTC" firstStartedPulling="2026-01-31 05:02:33.055371007 +0000 UTC m=+1162.004192692" lastFinishedPulling="2026-01-31 05:02:45.328061896 +0000 UTC m=+1174.276883581" observedRunningTime="2026-01-31 05:02:46.464819407 +0000 UTC m=+1175.413641092" watchObservedRunningTime="2026-01-31 05:02:46.468749118 +0000 UTC m=+1175.417570803" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.499363 4832 scope.go:117] "RemoveContainer" containerID="42f45a79a158dfa648f602e7c7f474d6bf6ca6d88c97225468cd07d29e85ec1c" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.508736 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.522046 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.531869 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:46 crc kubenswrapper[4832]: E0131 05:02:46.532399 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="proxy-httpd" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532422 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="proxy-httpd" Jan 31 05:02:46 crc kubenswrapper[4832]: E0131 05:02:46.532450 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-notification-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532458 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-notification-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: E0131 05:02:46.532475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-central-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532483 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-central-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: E0131 05:02:46.532500 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="sg-core" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532507 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="sg-core" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532744 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="proxy-httpd" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532792 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-central-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532807 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="ceilometer-notification-agent" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.532824 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" containerName="sg-core" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.535995 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.541038 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.541091 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.548424 4832 scope.go:117] "RemoveContainer" containerID="2599f104f4ab1d160c4382aa5f79a8263dc6e9baff66f5ae5d89fd9c6eda8140" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.551641 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607233 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607337 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd\") pod \"ceilometer-0\" (UID: 
\"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607512 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zgm\" (UniqueName: \"kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.607664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.619060 4832 scope.go:117] "RemoveContainer" containerID="36c56f17cad3e341c219f1f4020fb202ca3a55baafc71069fba93db9cdc4ddd0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.711658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.711894 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.711977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.712042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.712102 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.712271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.712480 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.712610 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zgm\" (UniqueName: \"kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.713160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.729468 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.730152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.733298 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zgm\" (UniqueName: \"kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm\") pod \"ceilometer-0\" (UID: 
\"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.736509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.739282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " pod="openstack/ceilometer-0" Jan 31 05:02:46 crc kubenswrapper[4832]: I0131 05:02:46.860387 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.476709 4832 generic.go:334] "Generic (PLEG): container finished" podID="a4e367b5-55d2-4e64-9fda-e9b79fe2684f" containerID="f54b20bf046478851aeff7b9746d68e8c7ffd729b6996c9ce97218e17c4ae5b6" exitCode=0 Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.476787 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" event={"ID":"a4e367b5-55d2-4e64-9fda-e9b79fe2684f","Type":"ContainerDied","Data":"f54b20bf046478851aeff7b9746d68e8c7ffd729b6996c9ce97218e17c4ae5b6"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.480897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwzlw" event={"ID":"96a7b311-e7f0-4471-937c-bc6330f4c1c5","Type":"ContainerStarted","Data":"969c559518333a73a6204d199e23eef77f959f6dcf5a46813c2fcc18be6da989"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.482420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" event={"ID":"d15a8ae3-e07a-4f6d-9364-a847ec46f620","Type":"ContainerStarted","Data":"26be16a39c82094bbe783257d7baafd4e46aa9ad9f65ecebdc4baa0a5a19f82e"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.495378 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.505826 4832 generic.go:334] "Generic (PLEG): container finished" podID="4901d831-7d96-44f6-afde-ab64894754b5" containerID="622d0e18b365feccb5aff103f23927913295c2b63e39705302e5b6e3f4211add" exitCode=0 Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.505945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k4pzp" event={"ID":"4901d831-7d96-44f6-afde-ab64894754b5","Type":"ContainerDied","Data":"622d0e18b365feccb5aff103f23927913295c2b63e39705302e5b6e3f4211add"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.514341 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3153-account-create-update-xpnkf" event={"ID":"be592430-58f0-43c6-b740-fd413081c5c4","Type":"ContainerStarted","Data":"85b1c8e42122f92e2a6350fb3b7a92d54d2d2edf8463eca5c27de383eb0113a8"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.514383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3153-account-create-update-xpnkf" event={"ID":"be592430-58f0-43c6-b740-fd413081c5c4","Type":"ContainerStarted","Data":"51e3eeac34e46d3acae739acefbc43ba1459e432c09a04ca165faa1e15e4b687"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.517796 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6913a01-865b-47e6-bf86-fe0ecfc7ea42" containerID="f9195aa1df7c4c3882ff11144129c01f69aa187ef17accbb673d88d0d273f32e" exitCode=0 Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.518419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5x7l7" 
event={"ID":"a6913a01-865b-47e6-bf86-fe0ecfc7ea42","Type":"ContainerDied","Data":"f9195aa1df7c4c3882ff11144129c01f69aa187ef17accbb673d88d0d273f32e"} Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.550466 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" podStartSLOduration=9.550444747 podStartE2EDuration="9.550444747s" podCreationTimestamp="2026-01-31 05:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:47.550415296 +0000 UTC m=+1176.499236981" watchObservedRunningTime="2026-01-31 05:02:47.550444747 +0000 UTC m=+1176.499266422" Jan 31 05:02:47 crc kubenswrapper[4832]: W0131 05:02:47.555614 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod800a85ea_e52f_4735_bebe_47974b9d7732.slice/crio-80e9e8dafa11b91c70835a50bdd72fd415553b5e393c870a7c4d192368287df4 WatchSource:0}: Error finding container 80e9e8dafa11b91c70835a50bdd72fd415553b5e393c870a7c4d192368287df4: Status 404 returned error can't find the container with id 80e9e8dafa11b91c70835a50bdd72fd415553b5e393c870a7c4d192368287df4 Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.609687 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3153-account-create-update-xpnkf" podStartSLOduration=9.609663328 podStartE2EDuration="9.609663328s" podCreationTimestamp="2026-01-31 05:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:02:47.604688904 +0000 UTC m=+1176.553510589" watchObservedRunningTime="2026-01-31 05:02:47.609663328 +0000 UTC m=+1176.558485023" Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.823520 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 31 05:02:47 crc kubenswrapper[4832]: I0131 05:02:47.871351 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4aa6cc-0613-4424-bfe8-4ff749661c27" path="/var/lib/kubelet/pods/ef4aa6cc-0613-4424-bfe8-4ff749661c27/volumes" Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.528193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerStarted","Data":"80e9e8dafa11b91c70835a50bdd72fd415553b5e393c870a7c4d192368287df4"} Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.530593 4832 generic.go:334] "Generic (PLEG): container finished" podID="96a7b311-e7f0-4471-937c-bc6330f4c1c5" containerID="969c559518333a73a6204d199e23eef77f959f6dcf5a46813c2fcc18be6da989" exitCode=0 Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.530766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwzlw" event={"ID":"96a7b311-e7f0-4471-937c-bc6330f4c1c5","Type":"ContainerDied","Data":"969c559518333a73a6204d199e23eef77f959f6dcf5a46813c2fcc18be6da989"} Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.536491 4832 generic.go:334] "Generic (PLEG): container finished" podID="d15a8ae3-e07a-4f6d-9364-a847ec46f620" containerID="26be16a39c82094bbe783257d7baafd4e46aa9ad9f65ecebdc4baa0a5a19f82e" exitCode=0 Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.536614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" event={"ID":"d15a8ae3-e07a-4f6d-9364-a847ec46f620","Type":"ContainerDied","Data":"26be16a39c82094bbe783257d7baafd4e46aa9ad9f65ecebdc4baa0a5a19f82e"} Jan 31 05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.540160 4832 generic.go:334] "Generic (PLEG): container finished" podID="be592430-58f0-43c6-b740-fd413081c5c4" containerID="85b1c8e42122f92e2a6350fb3b7a92d54d2d2edf8463eca5c27de383eb0113a8" exitCode=0 Jan 31 
05:02:48 crc kubenswrapper[4832]: I0131 05:02:48.540237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3153-account-create-update-xpnkf" event={"ID":"be592430-58f0-43c6-b740-fd413081c5c4","Type":"ContainerDied","Data":"85b1c8e42122f92e2a6350fb3b7a92d54d2d2edf8463eca5c27de383eb0113a8"} Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.487503 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.494755 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.502355 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-k4pzp" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.508645 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.575669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5x7l7" event={"ID":"a6913a01-865b-47e6-bf86-fe0ecfc7ea42","Type":"ContainerDied","Data":"29ccc0d58767e1e966ca925487fa4d242330df5d66bcb37c316e9e51a7a02a64"} Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.575729 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29ccc0d58767e1e966ca925487fa4d242330df5d66bcb37c316e9e51a7a02a64" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.575816 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5x7l7" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.580515 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.580887 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cce3-account-create-update-5j49p" event={"ID":"a4e367b5-55d2-4e64-9fda-e9b79fe2684f","Type":"ContainerDied","Data":"1a36180cc63cd501e77a91a46500b04e8bd2142e016dfed9f689419045792497"} Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.580912 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a36180cc63cd501e77a91a46500b04e8bd2142e016dfed9f689419045792497" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.586034 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jwzlw" event={"ID":"96a7b311-e7f0-4471-937c-bc6330f4c1c5","Type":"ContainerDied","Data":"ebe0023e4610ae90149b3c3522f9d95a5af943fc2a1fd998a11abc2019c86172"} Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.586101 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebe0023e4610ae90149b3c3522f9d95a5af943fc2a1fd998a11abc2019c86172" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.586049 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jwzlw" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.588042 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-k4pzp" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.588251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-k4pzp" event={"ID":"4901d831-7d96-44f6-afde-ab64894754b5","Type":"ContainerDied","Data":"9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05"} Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.588274 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2b6c4ce8da56032bc0564a3eb2a524e7ec823e0a36ccd9421069e9d3e8fd05" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.600703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgncz\" (UniqueName: \"kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz\") pod \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\" (UID: \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.600809 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts\") pod \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.600861 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qtn\" (UniqueName: \"kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn\") pod \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\" (UID: \"a6913a01-865b-47e6-bf86-fe0ecfc7ea42\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.600889 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts\") pod \"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\" (UID: 
\"a4e367b5-55d2-4e64-9fda-e9b79fe2684f\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.600977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts\") pod \"4901d831-7d96-44f6-afde-ab64894754b5\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.601255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplzm\" (UniqueName: \"kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm\") pod \"4901d831-7d96-44f6-afde-ab64894754b5\" (UID: \"4901d831-7d96-44f6-afde-ab64894754b5\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.601834 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4e367b5-55d2-4e64-9fda-e9b79fe2684f" (UID: "a4e367b5-55d2-4e64-9fda-e9b79fe2684f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.601959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4901d831-7d96-44f6-afde-ab64894754b5" (UID: "4901d831-7d96-44f6-afde-ab64894754b5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.602766 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6913a01-865b-47e6-bf86-fe0ecfc7ea42" (UID: "a6913a01-865b-47e6-bf86-fe0ecfc7ea42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.622883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn" (OuterVolumeSpecName: "kube-api-access-77qtn") pod "a6913a01-865b-47e6-bf86-fe0ecfc7ea42" (UID: "a6913a01-865b-47e6-bf86-fe0ecfc7ea42"). InnerVolumeSpecName "kube-api-access-77qtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.623016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm" (OuterVolumeSpecName: "kube-api-access-rplzm") pod "4901d831-7d96-44f6-afde-ab64894754b5" (UID: "4901d831-7d96-44f6-afde-ab64894754b5"). InnerVolumeSpecName "kube-api-access-rplzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.625655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz" (OuterVolumeSpecName: "kube-api-access-qgncz") pod "a4e367b5-55d2-4e64-9fda-e9b79fe2684f" (UID: "a4e367b5-55d2-4e64-9fda-e9b79fe2684f"). InnerVolumeSpecName "kube-api-access-qgncz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704007 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tml9v\" (UniqueName: \"kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v\") pod \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts\") pod \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\" (UID: \"96a7b311-e7f0-4471-937c-bc6330f4c1c5\") " Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704695 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplzm\" (UniqueName: \"kubernetes.io/projected/4901d831-7d96-44f6-afde-ab64894754b5-kube-api-access-rplzm\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704712 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgncz\" (UniqueName: \"kubernetes.io/projected/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-kube-api-access-qgncz\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704724 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704735 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qtn\" (UniqueName: \"kubernetes.io/projected/a6913a01-865b-47e6-bf86-fe0ecfc7ea42-kube-api-access-77qtn\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704746 4832 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4e367b5-55d2-4e64-9fda-e9b79fe2684f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.704756 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4901d831-7d96-44f6-afde-ab64894754b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.705481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96a7b311-e7f0-4471-937c-bc6330f4c1c5" (UID: "96a7b311-e7f0-4471-937c-bc6330f4c1c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.710695 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v" (OuterVolumeSpecName: "kube-api-access-tml9v") pod "96a7b311-e7f0-4471-937c-bc6330f4c1c5" (UID: "96a7b311-e7f0-4471-937c-bc6330f4c1c5"). InnerVolumeSpecName "kube-api-access-tml9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.806672 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tml9v\" (UniqueName: \"kubernetes.io/projected/96a7b311-e7f0-4471-937c-bc6330f4c1c5-kube-api-access-tml9v\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:49 crc kubenswrapper[4832]: I0131 05:02:49.807052 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96a7b311-e7f0-4471-937c-bc6330f4c1c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.280547 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.292176 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.433439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts\") pod \"be592430-58f0-43c6-b740-fd413081c5c4\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.434043 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be592430-58f0-43c6-b740-fd413081c5c4" (UID: "be592430-58f0-43c6-b740-fd413081c5c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.434117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts\") pod \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.434271 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqmq4\" (UniqueName: \"kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4\") pod \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\" (UID: \"d15a8ae3-e07a-4f6d-9364-a847ec46f620\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.434316 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmts\" (UniqueName: \"kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts\") pod \"be592430-58f0-43c6-b740-fd413081c5c4\" (UID: \"be592430-58f0-43c6-b740-fd413081c5c4\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.434682 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d15a8ae3-e07a-4f6d-9364-a847ec46f620" (UID: "d15a8ae3-e07a-4f6d-9364-a847ec46f620"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.435105 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be592430-58f0-43c6-b740-fd413081c5c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.435152 4832 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d15a8ae3-e07a-4f6d-9364-a847ec46f620-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.442605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4" (OuterVolumeSpecName: "kube-api-access-fqmq4") pod "d15a8ae3-e07a-4f6d-9364-a847ec46f620" (UID: "d15a8ae3-e07a-4f6d-9364-a847ec46f620"). InnerVolumeSpecName "kube-api-access-fqmq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.457742 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts" (OuterVolumeSpecName: "kube-api-access-8rmts") pod "be592430-58f0-43c6-b740-fd413081c5c4" (UID: "be592430-58f0-43c6-b740-fd413081c5c4"). InnerVolumeSpecName "kube-api-access-8rmts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.536746 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqmq4\" (UniqueName: \"kubernetes.io/projected/d15a8ae3-e07a-4f6d-9364-a847ec46f620-kube-api-access-fqmq4\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.536791 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rmts\" (UniqueName: \"kubernetes.io/projected/be592430-58f0-43c6-b740-fd413081c5c4-kube-api-access-8rmts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.625976 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerStarted","Data":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.634917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" event={"ID":"d15a8ae3-e07a-4f6d-9364-a847ec46f620","Type":"ContainerDied","Data":"a57b5678f1af132e47f8259f593aa25200897602d312f95a4722ae8774f8c9d3"} Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.634967 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57b5678f1af132e47f8259f593aa25200897602d312f95a4722ae8774f8c9d3" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.635059 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7db2-account-create-update-gjdrd" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.659427 4832 generic.go:334] "Generic (PLEG): container finished" podID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerID="0adb770fffda48e8d6e6a7859b642de3672c588b793b6c40c38872ab6c2059b7" exitCode=0 Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.660059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerDied","Data":"0adb770fffda48e8d6e6a7859b642de3672c588b793b6c40c38872ab6c2059b7"} Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.667754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3153-account-create-update-xpnkf" event={"ID":"be592430-58f0-43c6-b740-fd413081c5c4","Type":"ContainerDied","Data":"51e3eeac34e46d3acae739acefbc43ba1459e432c09a04ca165faa1e15e4b687"} Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.667799 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e3eeac34e46d3acae739acefbc43ba1459e432c09a04ca165faa1e15e4b687" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.667869 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3153-account-create-update-xpnkf" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.845441 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.950168 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs\") pod \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.950456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle\") pod \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.950577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fv4\" (UniqueName: \"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4\") pod \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.950646 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config\") pod \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.950682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config\") pod \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\" (UID: \"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0\") " Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.959836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4" (OuterVolumeSpecName: "kube-api-access-57fv4") pod "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" (UID: "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0"). InnerVolumeSpecName "kube-api-access-57fv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:50 crc kubenswrapper[4832]: I0131 05:02:50.962703 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" (UID: "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.017770 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" (UID: "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.028230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config" (OuterVolumeSpecName: "config") pod "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" (UID: "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.055429 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fv4\" (UniqueName: \"kubernetes.io/projected/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-kube-api-access-57fv4\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.055459 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.055470 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.055479 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.074985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" (UID: "e3e9bf00-0cf3-4c1b-8114-979ca60b10a0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.158386 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.697026 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerStarted","Data":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.717801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c45f655bb-2jmz9" event={"ID":"e3e9bf00-0cf3-4c1b-8114-979ca60b10a0","Type":"ContainerDied","Data":"2f781ace961fc09e1164f6a168e6a7ce8319d9b9748e1815251e3d800c8d6a68"} Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.717865 4832 scope.go:117] "RemoveContainer" containerID="f2b58cf48366cb8fa2e610a60a2e1dd027dc03184f8b9d6d9337d9b04dd2e448" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.718040 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c45f655bb-2jmz9" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.795324 4832 scope.go:117] "RemoveContainer" containerID="0adb770fffda48e8d6e6a7859b642de3672c588b793b6c40c38872ab6c2059b7" Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.831722 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.842469 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c45f655bb-2jmz9"] Jan 31 05:02:51 crc kubenswrapper[4832]: I0131 05:02:51.883680 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" path="/var/lib/kubelet/pods/e3e9bf00-0cf3-4c1b-8114-979ca60b10a0/volumes" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.186497 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.286916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.286989 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glvj4\" (UniqueName: \"kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.287142 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key\") pod 
\"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.287246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.287335 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.287361 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.287391 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle\") pod \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\" (UID: \"02f959e1-19ff-4f88-927b-ef2d3ee6d87e\") " Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.288644 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs" (OuterVolumeSpecName: "logs") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.295850 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.298631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4" (OuterVolumeSpecName: "kube-api-access-glvj4") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "kube-api-access-glvj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.327708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.333727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts" (OuterVolumeSpecName: "scripts") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.339371 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data" (OuterVolumeSpecName: "config-data") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.379004 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "02f959e1-19ff-4f88-927b-ef2d3ee6d87e" (UID: "02f959e1-19ff-4f88-927b-ef2d3ee6d87e"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.390901 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.390943 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.390957 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.390975 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc 
kubenswrapper[4832]: I0131 05:02:52.390984 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glvj4\" (UniqueName: \"kubernetes.io/projected/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-kube-api-access-glvj4\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.390994 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.391003 4832 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f959e1-19ff-4f88-927b-ef2d3ee6d87e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.731408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerStarted","Data":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.734090 4832 generic.go:334] "Generic (PLEG): container finished" podID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerID="4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690" exitCode=137 Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.734170 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fd59dbb48-vjkkx" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.734207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerDied","Data":"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690"} Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.734280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fd59dbb48-vjkkx" event={"ID":"02f959e1-19ff-4f88-927b-ef2d3ee6d87e","Type":"ContainerDied","Data":"5df374a55811feed9339ff12019f8e4b13eeaeb640d5e576b238b02daa2875fe"} Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.734304 4832 scope.go:117] "RemoveContainer" containerID="0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.776691 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.788231 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fd59dbb48-vjkkx"] Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.854264 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.856662 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6df44bf7d7-6dwfp" Jan 31 05:02:52 crc kubenswrapper[4832]: I0131 05:02:52.989801 4832 scope.go:117] "RemoveContainer" containerID="4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690" Jan 31 05:02:53 crc kubenswrapper[4832]: I0131 05:02:53.028718 4832 scope.go:117] "RemoveContainer" containerID="0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28" Jan 31 05:02:53 crc kubenswrapper[4832]: E0131 05:02:53.029460 4832 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28\": container with ID starting with 0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28 not found: ID does not exist" containerID="0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28" Jan 31 05:02:53 crc kubenswrapper[4832]: I0131 05:02:53.029526 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28"} err="failed to get container status \"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28\": rpc error: code = NotFound desc = could not find container \"0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28\": container with ID starting with 0cd477ddc7e9602ba61a00d26536bd6addd4e75804a0663968e33e48626e5a28 not found: ID does not exist" Jan 31 05:02:53 crc kubenswrapper[4832]: I0131 05:02:53.029584 4832 scope.go:117] "RemoveContainer" containerID="4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690" Jan 31 05:02:53 crc kubenswrapper[4832]: E0131 05:02:53.032124 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690\": container with ID starting with 4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690 not found: ID does not exist" containerID="4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690" Jan 31 05:02:53 crc kubenswrapper[4832]: I0131 05:02:53.032171 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690"} err="failed to get container status \"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690\": rpc error: code = NotFound desc = could not find 
container \"4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690\": container with ID starting with 4094aadeea02b4a16438623d6afa0de6059eba2de4f411c535f3de2e34dff690 not found: ID does not exist" Jan 31 05:02:53 crc kubenswrapper[4832]: I0131 05:02:53.870844 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" path="/var/lib/kubelet/pods/02f959e1-19ff-4f88-927b-ef2d3ee6d87e/volumes" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.241277 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4jdb8"] Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242061 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4901d831-7d96-44f6-afde-ab64894754b5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242146 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4901d831-7d96-44f6-afde-ab64894754b5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242227 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15a8ae3-e07a-4f6d-9364-a847ec46f620" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242302 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15a8ae3-e07a-4f6d-9364-a847ec46f620" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242406 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242513 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242606 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-api" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242697 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-api" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242771 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e367b5-55d2-4e64-9fda-e9b79fe2684f" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242830 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e367b5-55d2-4e64-9fda-e9b79fe2684f" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.242890 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon-log" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.242946 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon-log" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.243007 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-httpd" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243067 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-httpd" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.243152 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a7b311-e7f0-4471-937c-bc6330f4c1c5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243220 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a7b311-e7f0-4471-937c-bc6330f4c1c5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.243292 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="be592430-58f0-43c6-b740-fd413081c5c4" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243367 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="be592430-58f0-43c6-b740-fd413081c5c4" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: E0131 05:02:54.243455 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6913a01-865b-47e6-bf86-fe0ecfc7ea42" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243532 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6913a01-865b-47e6-bf86-fe0ecfc7ea42" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243868 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6913a01-865b-47e6-bf86-fe0ecfc7ea42" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.243967 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e367b5-55d2-4e64-9fda-e9b79fe2684f" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244048 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon-log" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244128 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4901d831-7d96-44f6-afde-ab64894754b5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244273 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a7b311-e7f0-4471-937c-bc6330f4c1c5" containerName="mariadb-database-create" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244366 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f959e1-19ff-4f88-927b-ef2d3ee6d87e" containerName="horizon" Jan 31 05:02:54 crc 
kubenswrapper[4832]: I0131 05:02:54.244448 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="be592430-58f0-43c6-b740-fd413081c5c4" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244522 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-httpd" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244627 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15a8ae3-e07a-4f6d-9364-a847ec46f620" containerName="mariadb-account-create-update" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.244707 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e9bf00-0cf3-4c1b-8114-979ca60b10a0" containerName="neutron-api" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.245676 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.247931 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-72mk8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.250125 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.250964 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.286625 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4jdb8"] Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.318432 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.329447 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/placement-548576cf8d-gz7f7" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.331965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.332079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.332117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswdv\" (UniqueName: \"kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.332144 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.437248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data\") pod 
\"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.437342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswdv\" (UniqueName: \"kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.437371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.437444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.466447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.468269 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts\") pod 
\"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.469417 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.478429 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7459856588-428fk" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-log" containerID="cri-o://204f44d3e0b0ce04695dc29baa0901724195b2fe33957f5a8fb73bf893740d7a" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.476132 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.479059 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7459856588-428fk" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-api" containerID="cri-o://fe7b6ad6b226ece1c0b334931e5e1060ff4722f1f04ef225d248550a9dd0df4e" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.505329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswdv\" (UniqueName: \"kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv\") pod \"nova-cell0-conductor-db-sync-4jdb8\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.677737 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerStarted","Data":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770327 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-central-agent" containerID="cri-o://8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770361 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770392 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="proxy-httpd" containerID="cri-o://f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770405 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="sg-core" containerID="cri-o://e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.770437 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-notification-agent" containerID="cri-o://ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" gracePeriod=30 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.794447 
4832 generic.go:334] "Generic (PLEG): container finished" podID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerID="204f44d3e0b0ce04695dc29baa0901724195b2fe33957f5a8fb73bf893740d7a" exitCode=143 Jan 31 05:02:54 crc kubenswrapper[4832]: I0131 05:02:54.795666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerDied","Data":"204f44d3e0b0ce04695dc29baa0901724195b2fe33957f5a8fb73bf893740d7a"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.216293 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.568015771 podStartE2EDuration="9.216269635s" podCreationTimestamp="2026-01-31 05:02:46 +0000 UTC" firstStartedPulling="2026-01-31 05:02:47.56019234 +0000 UTC m=+1176.509014025" lastFinishedPulling="2026-01-31 05:02:54.208446204 +0000 UTC m=+1183.157267889" observedRunningTime="2026-01-31 05:02:54.807485673 +0000 UTC m=+1183.756307358" watchObservedRunningTime="2026-01-31 05:02:55.216269635 +0000 UTC m=+1184.165091310" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.223670 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4jdb8"] Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.622163 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.664994 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.665113 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.665220 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.665405 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.665464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69zgm\" (UniqueName: \"kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.666608 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.666446 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.666926 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd\") pod \"800a85ea-e52f-4735-bebe-47974b9d7732\" (UID: \"800a85ea-e52f-4735-bebe-47974b9d7732\") " Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.667230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.668041 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.671801 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/800a85ea-e52f-4735-bebe-47974b9d7732-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.682807 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts" (OuterVolumeSpecName: "scripts") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.683816 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm" (OuterVolumeSpecName: "kube-api-access-69zgm") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "kube-api-access-69zgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.718016 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.771797 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.774600 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.774630 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.774643 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.774655 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69zgm\" (UniqueName: \"kubernetes.io/projected/800a85ea-e52f-4735-bebe-47974b9d7732-kube-api-access-69zgm\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.795639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data" (OuterVolumeSpecName: "config-data") pod "800a85ea-e52f-4735-bebe-47974b9d7732" (UID: "800a85ea-e52f-4735-bebe-47974b9d7732"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.808533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" event={"ID":"67ec4022-010e-4c03-8e2c-622261e37510","Type":"ContainerStarted","Data":"12ec5c133636a61c502ed4b643a9d8dc03d7929c1556abf297dbd983d0cd3129"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.811917 4832 generic.go:334] "Generic (PLEG): container finished" podID="800a85ea-e52f-4735-bebe-47974b9d7732" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" exitCode=0 Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.811958 4832 generic.go:334] "Generic (PLEG): container finished" podID="800a85ea-e52f-4735-bebe-47974b9d7732" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" exitCode=2 Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.811971 4832 generic.go:334] "Generic (PLEG): container finished" podID="800a85ea-e52f-4735-bebe-47974b9d7732" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" exitCode=0 Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.811988 4832 generic.go:334] "Generic (PLEG): container finished" podID="800a85ea-e52f-4735-bebe-47974b9d7732" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" exitCode=0 Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.811989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerDied","Data":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerDied","Data":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} Jan 31 
05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerDied","Data":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerDied","Data":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812147 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"800a85ea-e52f-4735-bebe-47974b9d7732","Type":"ContainerDied","Data":"80e9e8dafa11b91c70835a50bdd72fd415553b5e393c870a7c4d192368287df4"} Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812176 4832 scope.go:117] "RemoveContainer" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.812493 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.850916 4832 scope.go:117] "RemoveContainer" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.853698 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.876717 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/800a85ea-e52f-4735-bebe-47974b9d7732-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.883965 4832 scope.go:117] "RemoveContainer" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.898078 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924001 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.924553 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="sg-core" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924584 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="sg-core" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.924618 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-central-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924627 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-central-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.924638 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="proxy-httpd" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924645 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="proxy-httpd" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.924662 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-notification-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924669 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-notification-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924911 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="proxy-httpd" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924938 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-notification-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924961 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="sg-core" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.924974 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" containerName="ceilometer-central-agent" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.926903 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.930004 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.930209 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.938159 4832 scope.go:117] "RemoveContainer" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.955582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.968780 4832 scope.go:117] "RemoveContainer" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.969771 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": container with ID starting with f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671 not found: ID does not exist" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.969835 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} err="failed to get container status \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": rpc error: code = NotFound desc = could not find container \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": container with ID starting with f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 
05:02:55.969882 4832 scope.go:117] "RemoveContainer" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.971798 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": container with ID starting with e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632 not found: ID does not exist" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.971863 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} err="failed to get container status \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": rpc error: code = NotFound desc = could not find container \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": container with ID starting with e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.971907 4832 scope.go:117] "RemoveContainer" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.973072 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": container with ID starting with ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629 not found: ID does not exist" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.973111 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} err="failed to get container status \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": rpc error: code = NotFound desc = could not find container \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": container with ID starting with ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.973141 4832 scope.go:117] "RemoveContainer" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: E0131 05:02:55.974859 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": container with ID starting with 8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d not found: ID does not exist" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.974944 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} err="failed to get container status \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": rpc error: code = NotFound desc = could not find container \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": container with ID starting with 8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.974998 4832 scope.go:117] "RemoveContainer" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.975381 4832 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} err="failed to get container status \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": rpc error: code = NotFound desc = could not find container \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": container with ID starting with f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.975408 4832 scope.go:117] "RemoveContainer" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.975832 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} err="failed to get container status \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": rpc error: code = NotFound desc = could not find container \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": container with ID starting with e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.975872 4832 scope.go:117] "RemoveContainer" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.976237 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} err="failed to get container status \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": rpc error: code = NotFound desc = could not find container \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": container with ID starting with ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629 not 
found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.976469 4832 scope.go:117] "RemoveContainer" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.977747 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} err="failed to get container status \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": rpc error: code = NotFound desc = could not find container \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": container with ID starting with 8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.977784 4832 scope.go:117] "RemoveContainer" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978247 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} err="failed to get container status \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": rpc error: code = NotFound desc = could not find container \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": container with ID starting with f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978299 4832 scope.go:117] "RemoveContainer" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978617 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978707 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} err="failed to get container status \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": rpc error: code = NotFound desc = could not find container \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": container with ID starting with e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978754 4832 scope.go:117] "RemoveContainer" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.978911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd\") pod \"ceilometer-0\" (UID: 
\"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979117 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} err="failed to get container status \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": rpc error: code = NotFound desc = could not find container \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": container with ID starting with ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfwj\" (UniqueName: \"kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979151 4832 scope.go:117] "RemoveContainer" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:55 
crc kubenswrapper[4832]: I0131 05:02:55.979460 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} err="failed to get container status \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": rpc error: code = NotFound desc = could not find container \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": container with ID starting with 8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979491 4832 scope.go:117] "RemoveContainer" containerID="f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979776 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671"} err="failed to get container status \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": rpc error: code = NotFound desc = could not find container \"f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671\": container with ID starting with f67ecb234399262f42b762c3a0c5c6e94186989c67cf8a7c510930d1298eb671 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.979804 4832 scope.go:117] "RemoveContainer" containerID="e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.980081 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632"} err="failed to get container status \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": rpc error: code = NotFound desc = could not find container \"e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632\": 
container with ID starting with e5e91479b74bd1e2ac5e6d7f98da97c9f7decf8d2f75a5e22fc1238c35009632 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.980188 4832 scope.go:117] "RemoveContainer" containerID="ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.980890 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629"} err="failed to get container status \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": rpc error: code = NotFound desc = could not find container \"ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629\": container with ID starting with ce9db463e7f2dcedb8df0800178d224e7abce337592de584756e1361c5d17629 not found: ID does not exist" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.980921 4832 scope.go:117] "RemoveContainer" containerID="8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d" Jan 31 05:02:55 crc kubenswrapper[4832]: I0131 05:02:55.981269 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d"} err="failed to get container status \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": rpc error: code = NotFound desc = could not find container \"8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d\": container with ID starting with 8e2b1aae5a4ec4d39e570e2a998ab51ffa5acca7659dd1bf9be9294ebd7dba5d not found: ID does not exist" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.090129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " 
pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.090220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.090948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.091161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.093982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.094085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.094239 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.094272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfwj\" (UniqueName: \"kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.095086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.097307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.098199 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.098552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.100150 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.119379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfwj\" (UniqueName: \"kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj\") pod \"ceilometer-0\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.259119 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:02:56 crc kubenswrapper[4832]: I0131 05:02:56.801593 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:02:57 crc kubenswrapper[4832]: I0131 05:02:57.858601 4832 generic.go:334] "Generic (PLEG): container finished" podID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerID="fe7b6ad6b226ece1c0b334931e5e1060ff4722f1f04ef225d248550a9dd0df4e" exitCode=0 Jan 31 05:02:57 crc kubenswrapper[4832]: I0131 05:02:57.859247 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerDied","Data":"fe7b6ad6b226ece1c0b334931e5e1060ff4722f1f04ef225d248550a9dd0df4e"} Jan 31 05:02:57 crc kubenswrapper[4832]: I0131 05:02:57.879825 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800a85ea-e52f-4735-bebe-47974b9d7732" path="/var/lib/kubelet/pods/800a85ea-e52f-4735-bebe-47974b9d7732/volumes" Jan 31 05:02:57 crc kubenswrapper[4832]: I0131 05:02:57.880770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerStarted","Data":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} Jan 31 05:02:57 crc kubenswrapper[4832]: I0131 05:02:57.880806 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerStarted","Data":"bc0a3762e38161591b8ec9cfadfda0d8ea20174771d1fc93a031d9feaaaec0aa"} Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.177603 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7459856588-428fk" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.269862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270003 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270037 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270174 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: 
\"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gvth\" (UniqueName: \"kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270287 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.270312 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts\") pod \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\" (UID: \"6eee292a-4bfc-4a13-9c27-4d381520e7e9\") " Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.271279 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs" (OuterVolumeSpecName: "logs") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.276981 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts" (OuterVolumeSpecName: "scripts") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.283539 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth" (OuterVolumeSpecName: "kube-api-access-8gvth") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "kube-api-access-8gvth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.337247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data" (OuterVolumeSpecName: "config-data") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.338881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.372883 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.372932 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.372947 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gvth\" (UniqueName: \"kubernetes.io/projected/6eee292a-4bfc-4a13-9c27-4d381520e7e9-kube-api-access-8gvth\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.372963 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.372975 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee292a-4bfc-4a13-9c27-4d381520e7e9-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.380668 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.388711 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6eee292a-4bfc-4a13-9c27-4d381520e7e9" (UID: "6eee292a-4bfc-4a13-9c27-4d381520e7e9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.475186 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.475232 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eee292a-4bfc-4a13-9c27-4d381520e7e9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.881208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7459856588-428fk" event={"ID":"6eee292a-4bfc-4a13-9c27-4d381520e7e9","Type":"ContainerDied","Data":"5c11468336b5fdbb0eeda2a1fd90a5f37dfc56c2fc8b648fc58f7af43943e63d"} Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.881766 4832 scope.go:117] "RemoveContainer" containerID="fe7b6ad6b226ece1c0b334931e5e1060ff4722f1f04ef225d248550a9dd0df4e" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.881540 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7459856588-428fk" Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.896773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerStarted","Data":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.896837 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerStarted","Data":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.925223 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:02:58 crc kubenswrapper[4832]: I0131 05:02:58.935074 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7459856588-428fk"] Jan 31 05:02:59 crc kubenswrapper[4832]: I0131 05:02:59.876952 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" path="/var/lib/kubelet/pods/6eee292a-4bfc-4a13-9c27-4d381520e7e9/volumes" Jan 31 05:03:01 crc kubenswrapper[4832]: I0131 05:03:01.603203 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:03:01 crc kubenswrapper[4832]: I0131 05:03:01.613927 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-log" containerID="cri-o://3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b" gracePeriod=30 Jan 31 05:03:01 crc kubenswrapper[4832]: I0131 05:03:01.614082 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a3186400-0bbc-4891-929e-471a0c30b648" 
containerName="glance-httpd" containerID="cri-o://589a20cd3e82be252115be21a675d4f8531512e35f5b3ad17c0d580ed7e07793" gracePeriod=30 Jan 31 05:03:01 crc kubenswrapper[4832]: E0131 05:03:01.765882 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3186400_0bbc_4891_929e_471a0c30b648.slice/crio-3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b.scope\": RecentStats: unable to find data in memory cache]" Jan 31 05:03:01 crc kubenswrapper[4832]: I0131 05:03:01.938326 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3186400-0bbc-4891-929e-471a0c30b648" containerID="3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b" exitCode=143 Jan 31 05:03:01 crc kubenswrapper[4832]: I0131 05:03:01.938377 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerDied","Data":"3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b"} Jan 31 05:03:02 crc kubenswrapper[4832]: I0131 05:03:02.484005 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:02 crc kubenswrapper[4832]: I0131 05:03:02.484329 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-log" containerID="cri-o://76f998b1e804798c08d65ea44f5ba2aa8511eca9fb1b4b7092f30d05170b0879" gracePeriod=30 Jan 31 05:03:02 crc kubenswrapper[4832]: I0131 05:03:02.484600 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-httpd" containerID="cri-o://62c0188a0762d2976aae6a9904c7e97f5e0b0e87e44bd61af2177f10558d45e6" 
gracePeriod=30 Jan 31 05:03:02 crc kubenswrapper[4832]: I0131 05:03:02.952513 4832 generic.go:334] "Generic (PLEG): container finished" podID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerID="76f998b1e804798c08d65ea44f5ba2aa8511eca9fb1b4b7092f30d05170b0879" exitCode=143 Jan 31 05:03:02 crc kubenswrapper[4832]: I0131 05:03:02.952808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerDied","Data":"76f998b1e804798c08d65ea44f5ba2aa8511eca9fb1b4b7092f30d05170b0879"} Jan 31 05:03:03 crc kubenswrapper[4832]: I0131 05:03:03.326685 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:03 crc kubenswrapper[4832]: I0131 05:03:03.947849 4832 scope.go:117] "RemoveContainer" containerID="204f44d3e0b0ce04695dc29baa0901724195b2fe33957f5a8fb73bf893740d7a" Jan 31 05:03:04 crc kubenswrapper[4832]: I0131 05:03:04.985546 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3186400-0bbc-4891-929e-471a0c30b648" containerID="589a20cd3e82be252115be21a675d4f8531512e35f5b3ad17c0d580ed7e07793" exitCode=0 Jan 31 05:03:04 crc kubenswrapper[4832]: I0131 05:03:04.985611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerDied","Data":"589a20cd3e82be252115be21a675d4f8531512e35f5b3ad17c0d580ed7e07793"} Jan 31 05:03:04 crc kubenswrapper[4832]: I0131 05:03:04.996821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" event={"ID":"67ec4022-010e-4c03-8e2c-622261e37510","Type":"ContainerStarted","Data":"a3dcc126a8db5958bac709afb63843f8f51f549b070356980519366b39791051"} Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002020 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerStarted","Data":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002167 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-central-agent" containerID="cri-o://28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" gracePeriod=30 Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002371 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002416 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="proxy-httpd" containerID="cri-o://f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" gracePeriod=30 Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002457 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="sg-core" containerID="cri-o://7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" gracePeriod=30 Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.002493 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-notification-agent" containerID="cri-o://0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" gracePeriod=30 Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.026961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" podStartSLOduration=1.730533176 podStartE2EDuration="11.026930861s" podCreationTimestamp="2026-01-31 
05:02:54 +0000 UTC" firstStartedPulling="2026-01-31 05:02:55.249914361 +0000 UTC m=+1184.198736046" lastFinishedPulling="2026-01-31 05:03:04.546312036 +0000 UTC m=+1193.495133731" observedRunningTime="2026-01-31 05:03:05.017788318 +0000 UTC m=+1193.966610003" watchObservedRunningTime="2026-01-31 05:03:05.026930861 +0000 UTC m=+1193.975752546" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.078626 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.385048718 podStartE2EDuration="10.078602209s" podCreationTimestamp="2026-01-31 05:02:55 +0000 UTC" firstStartedPulling="2026-01-31 05:02:56.82011252 +0000 UTC m=+1185.768934205" lastFinishedPulling="2026-01-31 05:03:04.513666011 +0000 UTC m=+1193.462487696" observedRunningTime="2026-01-31 05:03:05.047985577 +0000 UTC m=+1193.996807282" watchObservedRunningTime="2026-01-31 05:03:05.078602209 +0000 UTC m=+1194.027423894" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.421651 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.570625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.570731 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.570878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.570946 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.570999 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6qjf\" (UniqueName: \"kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.571145 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.571190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.571213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs\") pod \"a3186400-0bbc-4891-929e-471a0c30b648\" (UID: \"a3186400-0bbc-4891-929e-471a0c30b648\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.571385 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.572174 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.572785 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs" (OuterVolumeSpecName: "logs") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.582308 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.582524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf" (OuterVolumeSpecName: "kube-api-access-r6qjf") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "kube-api-access-r6qjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.583268 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts" (OuterVolumeSpecName: "scripts") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.634840 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data" (OuterVolumeSpecName: "config-data") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.634904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.662408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a3186400-0bbc-4891-929e-471a0c30b648" (UID: "a3186400-0bbc-4891-929e-471a0c30b648"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674023 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674076 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6qjf\" (UniqueName: \"kubernetes.io/projected/a3186400-0bbc-4891-929e-471a0c30b648-kube-api-access-r6qjf\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674088 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674098 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3186400-0bbc-4891-929e-471a0c30b648-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 
crc kubenswrapper[4832]: I0131 05:03:05.674130 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674140 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.674148 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3186400-0bbc-4891-929e-471a0c30b648-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.711305 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.775980 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.864118 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkfwj\" (UniqueName: \"kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.978758 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data\") pod \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\" (UID: \"63799d8a-67e4-44c3-ae3e-ad2272a25f80\") " Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.987928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.988317 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.995707 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts" (OuterVolumeSpecName: "scripts") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:05 crc kubenswrapper[4832]: I0131 05:03:05.998429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj" (OuterVolumeSpecName: "kube-api-access-lkfwj") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "kube-api-access-lkfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.028620 4832 generic.go:334] "Generic (PLEG): container finished" podID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" exitCode=0 Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.028663 4832 generic.go:334] "Generic (PLEG): container finished" podID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" exitCode=2 Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.028671 4832 generic.go:334] "Generic (PLEG): container finished" podID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" exitCode=0 Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.028678 4832 generic.go:334] "Generic (PLEG): container finished" podID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" exitCode=0 Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030518 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerDied","Data":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030651 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerDied","Data":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerDied","Data":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerDied","Data":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63799d8a-67e4-44c3-ae3e-ad2272a25f80","Type":"ContainerDied","Data":"bc0a3762e38161591b8ec9cfadfda0d8ea20174771d1fc93a031d9feaaaec0aa"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.030717 4832 scope.go:117] "RemoveContainer" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.043617 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.044464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a3186400-0bbc-4891-929e-471a0c30b648","Type":"ContainerDied","Data":"ae4038edf8693f21d7a51c52fd1edcead078ffeb845f9646ceeb35b8f9bdc92e"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.044647 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.060735 4832 generic.go:334] "Generic (PLEG): container finished" podID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerID="62c0188a0762d2976aae6a9904c7e97f5e0b0e87e44bd61af2177f10558d45e6" exitCode=0 Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.060833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerDied","Data":"62c0188a0762d2976aae6a9904c7e97f5e0b0e87e44bd61af2177f10558d45e6"} Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.076000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081382 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkfwj\" (UniqueName: \"kubernetes.io/projected/63799d8a-67e4-44c3-ae3e-ad2272a25f80-kube-api-access-lkfwj\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081426 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081435 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081444 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63799d8a-67e4-44c3-ae3e-ad2272a25f80-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081454 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.081464 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.185075 4832 scope.go:117] "RemoveContainer" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.211684 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 
05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.227515 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.235932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data" (OuterVolumeSpecName: "config-data") pod "63799d8a-67e4-44c3-ae3e-ad2272a25f80" (UID: "63799d8a-67e4-44c3-ae3e-ad2272a25f80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.243871 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244405 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244428 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244438 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244445 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244460 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-api" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244468 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-api" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244480 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-notification-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244487 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-notification-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244517 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244524 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-log" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244546 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="sg-core" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244552 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="sg-core" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244588 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="proxy-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244597 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="proxy-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.244615 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-central-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244624 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-central-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244747 
4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244851 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-api" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244871 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244885 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-central-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244902 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3186400-0bbc-4891-929e-471a0c30b648" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244918 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="ceilometer-notification-agent" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244934 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="proxy-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244946 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eee292a-4bfc-4a13-9c27-4d381520e7e9" containerName="placement-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.244960 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" containerName="sg-core" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.245210 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 
05:03:06.245221 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.245243 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.245251 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.246995 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-log" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.247040 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" containerName="glance-httpd" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.247981 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.257054 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.258308 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.267936 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.279355 4832 scope.go:117] "RemoveContainer" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.287957 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63799d8a-67e4-44c3-ae3e-ad2272a25f80-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.334502 4832 scope.go:117] "RemoveContainer" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.389891 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.389976 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390072 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390466 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-947sc\" (UniqueName: \"kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390743 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.390824 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts\") pod \"a5e58db6-3021-4917-8f51-be18dd5bb77e\" (UID: \"a5e58db6-3021-4917-8f51-be18dd5bb77e\") " Jan 31 
05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.391342 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.391437 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-logs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.391459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.392618 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.394116 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs" (OuterVolumeSpecName: "logs") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.394369 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.394445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.394915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclxz\" (UniqueName: \"kubernetes.io/projected/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-kube-api-access-qclxz\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.395050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.395216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: 
\"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.395363 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.395377 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5e58db6-3021-4917-8f51-be18dd5bb77e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.396992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts" (OuterVolumeSpecName: "scripts") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.399616 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc" (OuterVolumeSpecName: "kube-api-access-947sc") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "kube-api-access-947sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.400981 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.403420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.423100 4832 scope.go:117] "RemoveContainer" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.429045 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": container with ID starting with f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee not found: ID does not exist" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.429096 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} err="failed to get container status \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": rpc error: code = NotFound desc = could not find container \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": container with ID starting with f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.429126 4832 scope.go:117] "RemoveContainer" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.433032 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": container with ID starting with 7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90 not found: ID does not exist" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 
05:03:06.433095 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} err="failed to get container status \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": rpc error: code = NotFound desc = could not find container \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": container with ID starting with 7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.433131 4832 scope.go:117] "RemoveContainer" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: E0131 05:03:06.433545 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": container with ID starting with 0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2 not found: ID does not exist" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.433603 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} err="failed to get container status \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": rpc error: code = NotFound desc = could not find container \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": container with ID starting with 0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.433632 4832 scope.go:117] "RemoveContainer" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc 
kubenswrapper[4832]: E0131 05:03:06.433941 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": container with ID starting with 28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527 not found: ID does not exist" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.433959 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} err="failed to get container status \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": rpc error: code = NotFound desc = could not find container \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": container with ID starting with 28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.433973 4832 scope.go:117] "RemoveContainer" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434337 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} err="failed to get container status \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": rpc error: code = NotFound desc = could not find container \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": container with ID starting with f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434372 4832 scope.go:117] "RemoveContainer" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 
05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434599 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} err="failed to get container status \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": rpc error: code = NotFound desc = could not find container \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": container with ID starting with 7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434619 4832 scope.go:117] "RemoveContainer" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434816 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} err="failed to get container status \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": rpc error: code = NotFound desc = could not find container \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": container with ID starting with 0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.434831 4832 scope.go:117] "RemoveContainer" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435056 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} err="failed to get container status \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": rpc error: code = NotFound desc = could not find container 
\"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": container with ID starting with 28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435078 4832 scope.go:117] "RemoveContainer" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435443 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} err="failed to get container status \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": rpc error: code = NotFound desc = could not find container \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": container with ID starting with f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435458 4832 scope.go:117] "RemoveContainer" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435714 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} err="failed to get container status \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": rpc error: code = NotFound desc = could not find container \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": container with ID starting with 7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435744 4832 scope.go:117] "RemoveContainer" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435948 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} err="failed to get container status \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": rpc error: code = NotFound desc = could not find container \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": container with ID starting with 0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.435975 4832 scope.go:117] "RemoveContainer" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436166 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} err="failed to get container status \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": rpc error: code = NotFound desc = could not find container \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": container with ID starting with 28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436188 4832 scope.go:117] "RemoveContainer" containerID="f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436346 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee"} err="failed to get container status \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": rpc error: code = NotFound desc = could not find container \"f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee\": container with ID starting with 
f13ae549a2345ffb6c133496a06320b18c1a8de5c1ef1b41b5445cccf36ae0ee not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436366 4832 scope.go:117] "RemoveContainer" containerID="7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436680 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90"} err="failed to get container status \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": rpc error: code = NotFound desc = could not find container \"7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90\": container with ID starting with 7db8a43927f036370f6e7fecac5b87b742e4f35283b4efb35d4ce2c5cccdaa90 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436701 4832 scope.go:117] "RemoveContainer" containerID="0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436895 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2"} err="failed to get container status \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": rpc error: code = NotFound desc = could not find container \"0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2\": container with ID starting with 0d03fa8ad29d30a0299e232e7fd5affb4fd6ab3a8a19100b0a8f671c74d09fe2 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.436920 4832 scope.go:117] "RemoveContainer" containerID="28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.437088 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527"} err="failed to get container status \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": rpc error: code = NotFound desc = could not find container \"28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527\": container with ID starting with 28ffbed9691eae10c5f553b78c0ad821e9d860ed7e76b974a75d8f9bc35a3527 not found: ID does not exist" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.437109 4832 scope.go:117] "RemoveContainer" containerID="589a20cd3e82be252115be21a675d4f8531512e35f5b3ad17c0d580ed7e07793" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.445249 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.464916 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.469640 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.472092 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.476668 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.480059 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.480178 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.485351 4832 scope.go:117] "RemoveContainer" containerID="3c08766284e82325842569207ac353c8237960cae18f2fbab4c2bbdaf7c8426b" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.488047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.493310 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data" (OuterVolumeSpecName: "config-data") pod "a5e58db6-3021-4917-8f51-be18dd5bb77e" (UID: "a5e58db6-3021-4917-8f51-be18dd5bb77e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500478 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-logs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500764 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 
05:03:06.501008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.501193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qclxz\" (UniqueName: \"kubernetes.io/projected/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-kube-api-access-qclxz\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.501354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502472 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-947sc\" (UniqueName: \"kubernetes.io/projected/a5e58db6-3021-4917-8f51-be18dd5bb77e-kube-api-access-947sc\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502589 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502670 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502744 4832 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502835 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5e58db6-3021-4917-8f51-be18dd5bb77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.502936 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.500536 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.501858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.506347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.507083 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.507582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.510899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-logs\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.510969 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.529907 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.532054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qclxz\" (UniqueName: \"kubernetes.io/projected/ace9e44a-55e5-48ae-9e2e-533ab30a5cd8-kube-api-access-qclxz\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " 
pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.552819 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8\") " pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.570263 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604702 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604729 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswtx\" (UniqueName: \"kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx\") pod \"ceilometer-0\" 
(UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.604981 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.706856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707350 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707513 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.707699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswtx\" (UniqueName: \"kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 
05:03:06.708785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.708844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.713148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.713784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.714170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.719804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " 
pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.730418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswtx\" (UniqueName: \"kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx\") pod \"ceilometer-0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " pod="openstack/ceilometer-0" Jan 31 05:03:06 crc kubenswrapper[4832]: I0131 05:03:06.821410 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.076313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5e58db6-3021-4917-8f51-be18dd5bb77e","Type":"ContainerDied","Data":"eea85042b0a48b43acbdc30e9d59651e1560f2e997539968e180981b68ab7d5b"} Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.076778 4832 scope.go:117] "RemoveContainer" containerID="62c0188a0762d2976aae6a9904c7e97f5e0b0e87e44bd61af2177f10558d45e6" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.077031 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.132604 4832 scope.go:117] "RemoveContainer" containerID="76f998b1e804798c08d65ea44f5ba2aa8511eca9fb1b4b7092f30d05170b0879" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.141107 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.168153 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.189749 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.199763 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.201642 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.210147 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.229947 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.230105 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.337325 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.337441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxqf\" (UniqueName: \"kubernetes.io/projected/0720e9f6-21f1-43e9-b075-a35d548f4af9-kube-api-access-dqxqf\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.337477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.337496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.337526 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.338018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.338074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.338362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-logs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.344883 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:07 crc 
kubenswrapper[4832]: W0131 05:03:07.364400 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0a95e0b_6927_4aa4_9b07_994854d26bd0.slice/crio-dc8373145fcfe8d915a33ae94f3db9c428a1da6b437b93fc5f63e9d1893b8753 WatchSource:0}: Error finding container dc8373145fcfe8d915a33ae94f3db9c428a1da6b437b93fc5f63e9d1893b8753: Status 404 returned error can't find the container with id dc8373145fcfe8d915a33ae94f3db9c428a1da6b437b93fc5f63e9d1893b8753 Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439889 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " 
pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-logs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.439999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.440081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxqf\" (UniqueName: \"kubernetes.io/projected/0720e9f6-21f1-43e9-b075-a35d548f4af9-kube-api-access-dqxqf\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.440196 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") device mount path \"/mnt/openstack/pv09\"" 
pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.440508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.440967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0720e9f6-21f1-43e9-b075-a35d548f4af9-logs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.447233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.448018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.449476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.450040 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0720e9f6-21f1-43e9-b075-a35d548f4af9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.460908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxqf\" (UniqueName: \"kubernetes.io/projected/0720e9f6-21f1-43e9-b075-a35d548f4af9-kube-api-access-dqxqf\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.470085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0720e9f6-21f1-43e9-b075-a35d548f4af9\") " pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.564637 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.882670 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63799d8a-67e4-44c3-ae3e-ad2272a25f80" path="/var/lib/kubelet/pods/63799d8a-67e4-44c3-ae3e-ad2272a25f80/volumes" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.884429 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3186400-0bbc-4891-929e-471a0c30b648" path="/var/lib/kubelet/pods/a3186400-0bbc-4891-929e-471a0c30b648/volumes" Jan 31 05:03:07 crc kubenswrapper[4832]: I0131 05:03:07.885820 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e58db6-3021-4917-8f51-be18dd5bb77e" path="/var/lib/kubelet/pods/a5e58db6-3021-4917-8f51-be18dd5bb77e/volumes" Jan 31 05:03:08 crc kubenswrapper[4832]: I0131 05:03:08.108915 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8","Type":"ContainerStarted","Data":"9aedfeb7a4e08fbe12fb7034358237e12469200fe9145514daa3f68c7b22b8be"} Jan 31 05:03:08 crc kubenswrapper[4832]: I0131 05:03:08.108966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8","Type":"ContainerStarted","Data":"574dc8300c20f700c79379d2bda1b4b091c390a68a6d9d5462800a50c4dc8aae"} Jan 31 05:03:08 crc kubenswrapper[4832]: I0131 05:03:08.111197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerStarted","Data":"dc8373145fcfe8d915a33ae94f3db9c428a1da6b437b93fc5f63e9d1893b8753"} Jan 31 05:03:08 crc kubenswrapper[4832]: I0131 05:03:08.164861 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 05:03:09 crc kubenswrapper[4832]: I0131 05:03:09.247838 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerStarted","Data":"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c"} Jan 31 05:03:09 crc kubenswrapper[4832]: I0131 05:03:09.284827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0720e9f6-21f1-43e9-b075-a35d548f4af9","Type":"ContainerStarted","Data":"6e28c45f4b0e5dfa30c68fe7bff94ec4c26580ad22e4ae11367a6edc06311e4c"} Jan 31 05:03:09 crc kubenswrapper[4832]: I0131 05:03:09.284892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0720e9f6-21f1-43e9-b075-a35d548f4af9","Type":"ContainerStarted","Data":"3b71a53c0f066c9e40ffd1b7a32f2ab530f9459a1f179643807aacaad07debe2"} Jan 31 05:03:09 crc kubenswrapper[4832]: I0131 05:03:09.323874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ace9e44a-55e5-48ae-9e2e-533ab30a5cd8","Type":"ContainerStarted","Data":"04307b99861f4c1e8b69b68da8586e6860752e9d76ae5d3ba4c8cbb52f561540"} Jan 31 05:03:10 crc kubenswrapper[4832]: I0131 05:03:10.340616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerStarted","Data":"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59"} Jan 31 05:03:10 crc kubenswrapper[4832]: I0131 05:03:10.341511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerStarted","Data":"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd"} Jan 31 05:03:10 crc kubenswrapper[4832]: I0131 05:03:10.343664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0720e9f6-21f1-43e9-b075-a35d548f4af9","Type":"ContainerStarted","Data":"d122fc4ee41879fe7c428db31a9ec37c7c7dd5b107e1f8261b8af1bb2675c608"} Jan 31 05:03:10 crc kubenswrapper[4832]: I0131 05:03:10.381402 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.381382182 podStartE2EDuration="4.381382182s" podCreationTimestamp="2026-01-31 05:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:09.378617368 +0000 UTC m=+1198.327439063" watchObservedRunningTime="2026-01-31 05:03:10.381382182 +0000 UTC m=+1199.330203867" Jan 31 05:03:10 crc kubenswrapper[4832]: I0131 05:03:10.389315 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.389293208 podStartE2EDuration="3.389293208s" podCreationTimestamp="2026-01-31 05:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:10.378215303 +0000 UTC m=+1199.327036988" watchObservedRunningTime="2026-01-31 05:03:10.389293208 +0000 UTC m=+1199.338114893" Jan 31 05:03:13 crc kubenswrapper[4832]: I0131 05:03:13.396337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerStarted","Data":"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e"} Jan 31 05:03:13 crc kubenswrapper[4832]: I0131 05:03:13.397416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:03:13 crc kubenswrapper[4832]: I0131 05:03:13.432800 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.678042413 podStartE2EDuration="7.432740621s" 
podCreationTimestamp="2026-01-31 05:03:06 +0000 UTC" firstStartedPulling="2026-01-31 05:03:07.369878122 +0000 UTC m=+1196.318699807" lastFinishedPulling="2026-01-31 05:03:12.12457633 +0000 UTC m=+1201.073398015" observedRunningTime="2026-01-31 05:03:13.418354524 +0000 UTC m=+1202.367176229" watchObservedRunningTime="2026-01-31 05:03:13.432740621 +0000 UTC m=+1202.381562306" Jan 31 05:03:14 crc kubenswrapper[4832]: I0131 05:03:14.073341 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:15 crc kubenswrapper[4832]: I0131 05:03:15.426800 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-central-agent" containerID="cri-o://61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c" gracePeriod=30 Jan 31 05:03:15 crc kubenswrapper[4832]: I0131 05:03:15.427316 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="proxy-httpd" containerID="cri-o://10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e" gracePeriod=30 Jan 31 05:03:15 crc kubenswrapper[4832]: I0131 05:03:15.427367 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="sg-core" containerID="cri-o://95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59" gracePeriod=30 Jan 31 05:03:15 crc kubenswrapper[4832]: I0131 05:03:15.427410 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-notification-agent" containerID="cri-o://8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd" gracePeriod=30 Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.437803 4832 
generic.go:334] "Generic (PLEG): container finished" podID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerID="10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e" exitCode=0 Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.438958 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerID="95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59" exitCode=2 Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.439032 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerID="8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd" exitCode=0 Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.438022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerDied","Data":"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e"} Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.439187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerDied","Data":"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59"} Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.439524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerDied","Data":"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd"} Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.571267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.571640 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 05:03:16 crc kubenswrapper[4832]: 
I0131 05:03:16.624081 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 05:03:16 crc kubenswrapper[4832]: I0131 05:03:16.634506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.453457 4832 generic.go:334] "Generic (PLEG): container finished" podID="67ec4022-010e-4c03-8e2c-622261e37510" containerID="a3dcc126a8db5958bac709afb63843f8f51f549b070356980519366b39791051" exitCode=0 Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.453665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" event={"ID":"67ec4022-010e-4c03-8e2c-622261e37510","Type":"ContainerDied","Data":"a3dcc126a8db5958bac709afb63843f8f51f549b070356980519366b39791051"} Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.454443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.454510 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.566084 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.566169 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.607164 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:17 crc kubenswrapper[4832]: I0131 05:03:17.624475 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 
05:03:18 crc kubenswrapper[4832]: I0131 05:03:18.465609 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:18 crc kubenswrapper[4832]: I0131 05:03:18.466234 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:18 crc kubenswrapper[4832]: I0131 05:03:18.884086 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.023071 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts\") pod \"67ec4022-010e-4c03-8e2c-622261e37510\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.023182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle\") pod \"67ec4022-010e-4c03-8e2c-622261e37510\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.023222 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswdv\" (UniqueName: \"kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv\") pod \"67ec4022-010e-4c03-8e2c-622261e37510\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.023294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data\") pod \"67ec4022-010e-4c03-8e2c-622261e37510\" (UID: \"67ec4022-010e-4c03-8e2c-622261e37510\") " Jan 31 05:03:19 crc kubenswrapper[4832]: 
I0131 05:03:19.031075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts" (OuterVolumeSpecName: "scripts") pod "67ec4022-010e-4c03-8e2c-622261e37510" (UID: "67ec4022-010e-4c03-8e2c-622261e37510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.032197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv" (OuterVolumeSpecName: "kube-api-access-mswdv") pod "67ec4022-010e-4c03-8e2c-622261e37510" (UID: "67ec4022-010e-4c03-8e2c-622261e37510"). InnerVolumeSpecName "kube-api-access-mswdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.060176 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data" (OuterVolumeSpecName: "config-data") pod "67ec4022-010e-4c03-8e2c-622261e37510" (UID: "67ec4022-010e-4c03-8e2c-622261e37510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.103911 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67ec4022-010e-4c03-8e2c-622261e37510" (UID: "67ec4022-010e-4c03-8e2c-622261e37510"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.125486 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.125523 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.125532 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswdv\" (UniqueName: \"kubernetes.io/projected/67ec4022-010e-4c03-8e2c-622261e37510-kube-api-access-mswdv\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.125542 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec4022-010e-4c03-8e2c-622261e37510-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.198491 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.329719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.329773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswtx\" (UniqueName: \"kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.329803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.329871 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.329916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.330000 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.330022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml\") pod \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\" (UID: \"e0a95e0b-6927-4aa4-9b07-994854d26bd0\") " Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.331546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.332187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.343792 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts" (OuterVolumeSpecName: "scripts") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.343872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx" (OuterVolumeSpecName: "kube-api-access-mswtx") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "kube-api-access-mswtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.367468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.408498 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432661 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432693 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432705 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432716 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswtx\" (UniqueName: \"kubernetes.io/projected/e0a95e0b-6927-4aa4-9b07-994854d26bd0-kube-api-access-mswtx\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432724 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.432732 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0a95e0b-6927-4aa4-9b07-994854d26bd0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.446409 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data" (OuterVolumeSpecName: "config-data") pod "e0a95e0b-6927-4aa4-9b07-994854d26bd0" (UID: "e0a95e0b-6927-4aa4-9b07-994854d26bd0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.478845 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerID="61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c" exitCode=0 Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.478942 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.478987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerDied","Data":"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c"} Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.481192 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0a95e0b-6927-4aa4-9b07-994854d26bd0","Type":"ContainerDied","Data":"dc8373145fcfe8d915a33ae94f3db9c428a1da6b437b93fc5f63e9d1893b8753"} Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.481231 4832 scope.go:117] "RemoveContainer" containerID="10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.483956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" event={"ID":"67ec4022-010e-4c03-8e2c-622261e37510","Type":"ContainerDied","Data":"12ec5c133636a61c502ed4b643a9d8dc03d7929c1556abf297dbd983d0cd3129"} Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.484004 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12ec5c133636a61c502ed4b643a9d8dc03d7929c1556abf297dbd983d0cd3129" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.484054 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4jdb8" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.511590 4832 scope.go:117] "RemoveContainer" containerID="95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.534964 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a95e0b-6927-4aa4-9b07-994854d26bd0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.559461 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.559692 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.562764 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.573664 4832 scope.go:117] "RemoveContainer" containerID="8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.615759 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.636189 4832 scope.go:117] "RemoveContainer" containerID="61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661169 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.661781 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-notification-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661800 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-notification-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.661821 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="proxy-httpd" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661827 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="proxy-httpd" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.661843 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="sg-core" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661849 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="sg-core" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.661878 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ec4022-010e-4c03-8e2c-622261e37510" containerName="nova-cell0-conductor-db-sync" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661884 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ec4022-010e-4c03-8e2c-622261e37510" containerName="nova-cell0-conductor-db-sync" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.661895 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-central-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.661904 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-central-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.662094 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="sg-core" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.662124 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-notification-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.662132 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="proxy-httpd" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.662140 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ec4022-010e-4c03-8e2c-622261e37510" containerName="nova-cell0-conductor-db-sync" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.662149 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" containerName="ceilometer-central-agent" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.664022 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.666703 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.667038 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.673646 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.687657 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.689168 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.691398 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.698732 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.699298 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-72mk8" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.699446 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.702895 4832 scope.go:117] "RemoveContainer" containerID="10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.703727 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e\": container with ID starting with 10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e not found: ID does not exist" containerID="10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.703770 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e"} err="failed to get container status \"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e\": rpc error: code = NotFound desc = could not find container \"10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e\": container with ID starting with 10877bb8c63d6b16e64f645805d5dfb367fa16d57da25f95fa26650520a2139e not found: ID does not exist" Jan 31 
05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.703803 4832 scope.go:117] "RemoveContainer" containerID="95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.704201 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59\": container with ID starting with 95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59 not found: ID does not exist" containerID="95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.704229 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59"} err="failed to get container status \"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59\": rpc error: code = NotFound desc = could not find container \"95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59\": container with ID starting with 95144bbe95ddbc673b2af2655197e280e22cf0db05f9a9ce491403fa8e7e3d59 not found: ID does not exist" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.704246 4832 scope.go:117] "RemoveContainer" containerID="8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.704518 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd\": container with ID starting with 8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd not found: ID does not exist" containerID="8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.704571 4832 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd"} err="failed to get container status \"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd\": rpc error: code = NotFound desc = could not find container \"8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd\": container with ID starting with 8556dc57a770863308c28f760a180013457b3f76963be85392a3bd26a6191ffd not found: ID does not exist" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.704602 4832 scope.go:117] "RemoveContainer" containerID="61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c" Jan 31 05:03:19 crc kubenswrapper[4832]: E0131 05:03:19.705181 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c\": container with ID starting with 61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c not found: ID does not exist" containerID="61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.705227 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c"} err="failed to get container status \"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c\": rpc error: code = NotFound desc = could not find container \"61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c\": container with ID starting with 61d67540eccb8f2173b4972b8dd2ba753c322ecc9d31265f3dfed86b89e5ff6c not found: ID does not exist" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts\") pod \"ceilometer-0\" (UID: 
\"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740685 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxql\" (UniqueName: \"kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 
05:03:19.740944 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhnb\" (UniqueName: \"kubernetes.io/projected/bbfa7e6d-7200-4b32-9749-d04865e74d5e-kube-api-access-wbhnb\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.740969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.741030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.741108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.843754 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbhnb\" (UniqueName: \"kubernetes.io/projected/bbfa7e6d-7200-4b32-9749-d04865e74d5e-kube-api-access-wbhnb\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.843854 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.843926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd\") pod \"ceilometer-0\" 
(UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxql\" (UniqueName: \"kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.844347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.849522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.850307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.853852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.853958 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.854386 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.854707 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.857470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.873808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfa7e6d-7200-4b32-9749-d04865e74d5e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: 
I0131 05:03:19.874206 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxql\" (UniqueName: \"kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql\") pod \"ceilometer-0\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " pod="openstack/ceilometer-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.878195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbhnb\" (UniqueName: \"kubernetes.io/projected/bbfa7e6d-7200-4b32-9749-d04865e74d5e-kube-api-access-wbhnb\") pod \"nova-cell0-conductor-0\" (UID: \"bbfa7e6d-7200-4b32-9749-d04865e74d5e\") " pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:19 crc kubenswrapper[4832]: I0131 05:03:19.879813 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a95e0b-6927-4aa4-9b07-994854d26bd0" path="/var/lib/kubelet/pods/e0a95e0b-6927-4aa4-9b07-994854d26bd0/volumes" Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.014340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.024611 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:20 crc kubenswrapper[4832]: W0131 05:03:20.608945 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfa7e6d_7200_4b32_9749_d04865e74d5e.slice/crio-002376ad735a5fa5f39b0d79d6a64ec40ced2d1b1661281acfcea0b6a5df04b3 WatchSource:0}: Error finding container 002376ad735a5fa5f39b0d79d6a64ec40ced2d1b1661281acfcea0b6a5df04b3: Status 404 returned error can't find the container with id 002376ad735a5fa5f39b0d79d6a64ec40ced2d1b1661281acfcea0b6a5df04b3 Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.613934 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 31 05:03:20 crc kubenswrapper[4832]: W0131 05:03:20.736336 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1631bf1_d29f_41d9_a70b_4441863ac0fb.slice/crio-567b80825d4d266702c145ee417379c51b86be791001fb2dd48d13ed6955cf8f WatchSource:0}: Error finding container 567b80825d4d266702c145ee417379c51b86be791001fb2dd48d13ed6955cf8f: Status 404 returned error can't find the container with id 567b80825d4d266702c145ee417379c51b86be791001fb2dd48d13ed6955cf8f Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.736546 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.768155 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.768295 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 05:03:20 crc kubenswrapper[4832]: I0131 05:03:20.770817 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 05:03:21 crc 
kubenswrapper[4832]: I0131 05:03:21.514668 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerStarted","Data":"567b80825d4d266702c145ee417379c51b86be791001fb2dd48d13ed6955cf8f"} Jan 31 05:03:21 crc kubenswrapper[4832]: I0131 05:03:21.516500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfa7e6d-7200-4b32-9749-d04865e74d5e","Type":"ContainerStarted","Data":"002376ad735a5fa5f39b0d79d6a64ec40ced2d1b1661281acfcea0b6a5df04b3"} Jan 31 05:03:22 crc kubenswrapper[4832]: I0131 05:03:22.526829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfa7e6d-7200-4b32-9749-d04865e74d5e","Type":"ContainerStarted","Data":"6ffa076cfe611ed442f63b231b2f765a2872202f4c8fd413d02adf693649d86c"} Jan 31 05:03:22 crc kubenswrapper[4832]: I0131 05:03:22.531295 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:22 crc kubenswrapper[4832]: I0131 05:03:22.531444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerStarted","Data":"a2a7d673f85afd0cd5a894d6e7c5f70467f87f3b6b94d26016ff12c102341daa"} Jan 31 05:03:22 crc kubenswrapper[4832]: I0131 05:03:22.575806 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.575784787 podStartE2EDuration="3.575784787s" podCreationTimestamp="2026-01-31 05:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:22.54885466 +0000 UTC m=+1211.497676345" watchObservedRunningTime="2026-01-31 05:03:22.575784787 +0000 UTC m=+1211.524606482" Jan 31 05:03:23 crc kubenswrapper[4832]: I0131 05:03:23.549823 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerStarted","Data":"93080b0a65d16d8d75e60a94c07fb0b0b88535cddd72bb85dbe06cf45121b08f"} Jan 31 05:03:23 crc kubenswrapper[4832]: I0131 05:03:23.550249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerStarted","Data":"abfeb396abe6feacb57b252ef44080f2f739993b4c1b9ecc72f5a90ad6a56477"} Jan 31 05:03:26 crc kubenswrapper[4832]: I0131 05:03:26.587613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerStarted","Data":"aa24381ecb1d33f2ca7298eb215fef61e0f307b8adafac52e41c3d684c074b6f"} Jan 31 05:03:26 crc kubenswrapper[4832]: I0131 05:03:26.588467 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:03:26 crc kubenswrapper[4832]: I0131 05:03:26.636426 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.813709268 podStartE2EDuration="7.636401182s" podCreationTimestamp="2026-01-31 05:03:19 +0000 UTC" firstStartedPulling="2026-01-31 05:03:20.739223224 +0000 UTC m=+1209.688044919" lastFinishedPulling="2026-01-31 05:03:25.561915148 +0000 UTC m=+1214.510736833" observedRunningTime="2026-01-31 05:03:26.627212427 +0000 UTC m=+1215.576034132" watchObservedRunningTime="2026-01-31 05:03:26.636401182 +0000 UTC m=+1215.585222877" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.067634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.692279 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pgfgj"] Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 
05:03:30.694103 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.697684 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.697823 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.705006 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgfgj"] Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.832547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.832648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.833439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.833610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-rpgjh\" (UniqueName: \"kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.936092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.936506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpgjh\" (UniqueName: \"kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.936700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.936751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.945517 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.951945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.968297 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.970175 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpgjh\" (UniqueName: \"kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh\") pod \"nova-cell0-cell-mapping-pgfgj\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.997232 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:03:30 crc kubenswrapper[4832]: I0131 05:03:30.998827 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.000966 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.015309 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.017113 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.018976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.019691 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.026269 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.036069 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc 
kubenswrapper[4832]: I0131 05:03:31.145328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8tx\" (UniqueName: \"kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145422 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.145483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdks\" (UniqueName: \"kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.166198 4832 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.167819 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.178167 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.204386 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.246989 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdks\" (UniqueName: \"kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247176 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8tx\" (UniqueName: \"kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247234 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.247272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.251868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.280255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.283183 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.288449 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.303841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.315244 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdks\" (UniqueName: \"kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks\") pod \"nova-cell1-novncproxy-0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.331052 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8tx\" (UniqueName: \"kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx\") pod \"nova-api-0\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.350514 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " 
pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.350618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw26s\" (UniqueName: \"kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.350668 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.441139 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.453127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.453789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.453934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw26s\" (UniqueName: \"kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.454011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.463032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.463543 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.466220 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.520711 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.577123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dtwk\" (UniqueName: \"kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.577230 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.577387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.577595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.619411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.637449 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw26s\" (UniqueName: \"kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s\") pod \"nova-scheduler-0\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.672006 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.686016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.686183 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dtwk\" (UniqueName: \"kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.686213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.686246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.687928 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.720311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.724832 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.725591 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.727526 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.735988 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dtwk\" (UniqueName: \"kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk\") pod \"nova-metadata-0\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.773474 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.802639 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.815687 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzfw9\" (UniqueName: \"kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:31 crc kubenswrapper[4832]: I0131 05:03:31.899770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.001906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzfw9\" (UniqueName: \"kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002479 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.002970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.003340 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.004044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb\") 
pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.004086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.004110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.016445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgfgj"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.032333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzfw9\" (UniqueName: \"kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9\") pod \"dnsmasq-dns-bccf8f775-bv49m\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.064510 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.262588 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:03:32 crc kubenswrapper[4832]: W0131 05:03:32.305361 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4dfabe_ee65_4103_ab76_5beacb6e39d0.slice/crio-a0569d878dd486c965a80dbe8667501bbfe8931d37374149f221eec4589b4d4c WatchSource:0}: Error finding container a0569d878dd486c965a80dbe8667501bbfe8931d37374149f221eec4589b4d4c: Status 404 returned error can't find the container with id a0569d878dd486c965a80dbe8667501bbfe8931d37374149f221eec4589b4d4c Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.355443 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bx446"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.356973 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.362940 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.363261 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.375110 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bx446"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.412196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcrw\" (UniqueName: \"kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.412402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.412622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.412709 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.483104 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.516014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcrw\" (UniqueName: \"kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.516075 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.516137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.516175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: 
\"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.525554 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.526242 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.530044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.536216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcrw\" (UniqueName: \"kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw\") pod \"nova-cell1-conductor-db-sync-bx446\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.588058 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.685180 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.710934 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.743317 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f4dfabe-ee65-4103-ab76-5beacb6e39d0","Type":"ContainerStarted","Data":"a0569d878dd486c965a80dbe8667501bbfe8931d37374149f221eec4589b4d4c"} Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.747717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgfgj" event={"ID":"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5","Type":"ContainerStarted","Data":"921950b98c373d71bed145fc150ae8ae81dd172c85b0b97c772f65f2f9659991"} Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.747768 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgfgj" event={"ID":"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5","Type":"ContainerStarted","Data":"57c9c90c23fab8d15bf1e848066ce22b18d626d08bc4350313d8a52e169c2b3f"} Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.753626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ab45d0f-8c40-4f39-b885-0579260c0763","Type":"ContainerStarted","Data":"238fdaac1d09a6654602498c1425aca83924b161c74155e234238d8ae38e41e3"} Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.761798 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerStarted","Data":"b46a8cd67f0028f9ab126fa5a404b9324adf8d018c1dc893221de7cdea179f64"} Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.769051 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pgfgj" podStartSLOduration=2.769036412 
podStartE2EDuration="2.769036412s" podCreationTimestamp="2026-01-31 05:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:32.768209126 +0000 UTC m=+1221.717030811" watchObservedRunningTime="2026-01-31 05:03:32.769036412 +0000 UTC m=+1221.717858097" Jan 31 05:03:32 crc kubenswrapper[4832]: I0131 05:03:32.839972 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.283013 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bx446"] Jan 31 05:03:33 crc kubenswrapper[4832]: W0131 05:03:33.329917 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1348b78a_eddf_4b18_b3ab_3aa70968678f.slice/crio-4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d WatchSource:0}: Error finding container 4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d: Status 404 returned error can't find the container with id 4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.783324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bx446" event={"ID":"1348b78a-eddf-4b18-b3ab-3aa70968678f","Type":"ContainerStarted","Data":"179615d10f1bc54b71d799e1536b72a837c5b802a71ab86f3c9bf1a071c9ecb2"} Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.783717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bx446" event={"ID":"1348b78a-eddf-4b18-b3ab-3aa70968678f","Type":"ContainerStarted","Data":"4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d"} Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.786449 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerID="27e3435cb8b4ae4586c096ad64510ae5e7ed2a51d3ea2c917735fac608d826ee" exitCode=0 Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.786628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" event={"ID":"f8c28478-7f9b-4d59-8b06-b434b9110244","Type":"ContainerDied","Data":"27e3435cb8b4ae4586c096ad64510ae5e7ed2a51d3ea2c917735fac608d826ee"} Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.786698 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" event={"ID":"f8c28478-7f9b-4d59-8b06-b434b9110244","Type":"ContainerStarted","Data":"c71f13f56ac20169ec3c2ee00c31b16941fcda7250d71483b4cd91638179b88e"} Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.792850 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerStarted","Data":"d6fd2bfde498ca2571e5a9bbd2688134621dec0d46815cfa849b312509483e9c"} Jan 31 05:03:33 crc kubenswrapper[4832]: I0131 05:03:33.804543 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bx446" podStartSLOduration=1.804527213 podStartE2EDuration="1.804527213s" podCreationTimestamp="2026-01-31 05:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:33.798849556 +0000 UTC m=+1222.747671241" watchObservedRunningTime="2026-01-31 05:03:33.804527213 +0000 UTC m=+1222.753348898" Jan 31 05:03:34 crc kubenswrapper[4832]: I0131 05:03:34.879399 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:03:34 crc kubenswrapper[4832]: I0131 05:03:34.889989 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 
05:03:36.835029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ab45d0f-8c40-4f39-b885-0579260c0763","Type":"ContainerStarted","Data":"8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.838804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerStarted","Data":"9c220251c55b8e3f7666265ed0a32ba018a88be0f74787678be470474106d09b"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.838845 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerStarted","Data":"08d9e1da07c7fe1522a870de3852911209907f999af27db9911fe69ab7fdb037"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.842241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerStarted","Data":"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.842288 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerStarted","Data":"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.842455 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-log" containerID="cri-o://7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" gracePeriod=30 Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.842603 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-metadata" containerID="cri-o://1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" gracePeriod=30 Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.853664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f4dfabe-ee65-4103-ab76-5beacb6e39d0","Type":"ContainerStarted","Data":"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.853777 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe" gracePeriod=30 Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.864316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" event={"ID":"f8c28478-7f9b-4d59-8b06-b434b9110244","Type":"ContainerStarted","Data":"3890e9ab04c1da55c46a9e05bc800806f5f369b6aaa87ce1f5ee52eacbd709c6"} Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.865079 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.902829 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7391581499999997 podStartE2EDuration="5.902800921s" podCreationTimestamp="2026-01-31 05:03:31 +0000 UTC" firstStartedPulling="2026-01-31 05:03:32.598658314 +0000 UTC m=+1221.547479999" lastFinishedPulling="2026-01-31 05:03:35.762301075 +0000 UTC m=+1224.711122770" observedRunningTime="2026-01-31 05:03:36.875786921 +0000 UTC m=+1225.824608606" watchObservedRunningTime="2026-01-31 05:03:36.902800921 +0000 UTC m=+1225.851622606" Jan 31 05:03:36 
crc kubenswrapper[4832]: I0131 05:03:36.926050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.473667693 podStartE2EDuration="6.926023523s" podCreationTimestamp="2026-01-31 05:03:30 +0000 UTC" firstStartedPulling="2026-01-31 05:03:32.31043035 +0000 UTC m=+1221.259252035" lastFinishedPulling="2026-01-31 05:03:35.76278616 +0000 UTC m=+1224.711607865" observedRunningTime="2026-01-31 05:03:36.903272046 +0000 UTC m=+1225.852093731" watchObservedRunningTime="2026-01-31 05:03:36.926023523 +0000 UTC m=+1225.874845208" Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.952923 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.933463774 podStartE2EDuration="5.95290037s" podCreationTimestamp="2026-01-31 05:03:31 +0000 UTC" firstStartedPulling="2026-01-31 05:03:32.761758286 +0000 UTC m=+1221.710579971" lastFinishedPulling="2026-01-31 05:03:35.781194862 +0000 UTC m=+1224.730016567" observedRunningTime="2026-01-31 05:03:36.926711355 +0000 UTC m=+1225.875533040" watchObservedRunningTime="2026-01-31 05:03:36.95290037 +0000 UTC m=+1225.901722055" Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.953291 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" podStartSLOduration=5.953286601 podStartE2EDuration="5.953286601s" podCreationTimestamp="2026-01-31 05:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:36.951748294 +0000 UTC m=+1225.900569979" watchObservedRunningTime="2026-01-31 05:03:36.953286601 +0000 UTC m=+1225.902108286" Jan 31 05:03:36 crc kubenswrapper[4832]: I0131 05:03:36.986883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.709045644 
podStartE2EDuration="6.986862145s" podCreationTimestamp="2026-01-31 05:03:30 +0000 UTC" firstStartedPulling="2026-01-31 05:03:32.483601986 +0000 UTC m=+1221.432423671" lastFinishedPulling="2026-01-31 05:03:35.761418477 +0000 UTC m=+1224.710240172" observedRunningTime="2026-01-31 05:03:36.969774215 +0000 UTC m=+1225.918595900" watchObservedRunningTime="2026-01-31 05:03:36.986862145 +0000 UTC m=+1225.935683850" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.598711 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.657266 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dtwk\" (UniqueName: \"kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk\") pod \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.657334 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle\") pod \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.657380 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs\") pod \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.657532 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data\") pod \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\" (UID: \"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8\") " Jan 
31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.658950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs" (OuterVolumeSpecName: "logs") pod "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" (UID: "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.666403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk" (OuterVolumeSpecName: "kube-api-access-8dtwk") pod "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" (UID: "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8"). InnerVolumeSpecName "kube-api-access-8dtwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.692109 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" (UID: "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.694216 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data" (OuterVolumeSpecName: "config-data") pod "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" (UID: "7ca791e7-c6b1-40f5-a80c-5e71013e2eb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.760524 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.760995 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dtwk\" (UniqueName: \"kubernetes.io/projected/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-kube-api-access-8dtwk\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.761067 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.761141 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879085 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerID="1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" exitCode=0 Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879124 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerID="7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" exitCode=143 Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerDied","Data":"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001"} Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879281 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerDied","Data":"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f"} Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879294 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ca791e7-c6b1-40f5-a80c-5e71013e2eb8","Type":"ContainerDied","Data":"d6fd2bfde498ca2571e5a9bbd2688134621dec0d46815cfa849b312509483e9c"} Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.879312 4832 scope.go:117] "RemoveContainer" containerID="1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.881355 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.928222 4832 scope.go:117] "RemoveContainer" containerID="7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.934964 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.950714 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.970098 4832 scope.go:117] "RemoveContainer" containerID="1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" Jan 31 05:03:37 crc kubenswrapper[4832]: E0131 05:03:37.977444 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001\": container with ID starting with 1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001 not found: ID does not exist" 
containerID="1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.977498 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001"} err="failed to get container status \"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001\": rpc error: code = NotFound desc = could not find container \"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001\": container with ID starting with 1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001 not found: ID does not exist" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.977530 4832 scope.go:117] "RemoveContainer" containerID="7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.979798 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:37 crc kubenswrapper[4832]: E0131 05:03:37.980472 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-metadata" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.980501 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-metadata" Jan 31 05:03:37 crc kubenswrapper[4832]: E0131 05:03:37.980550 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-log" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.980631 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-log" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.980905 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" 
containerName="nova-metadata-log" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.980933 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" containerName="nova-metadata-metadata" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.982531 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:37 crc kubenswrapper[4832]: E0131 05:03:37.986361 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f\": container with ID starting with 7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f not found: ID does not exist" containerID="7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.986421 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f"} err="failed to get container status \"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f\": rpc error: code = NotFound desc = could not find container \"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f\": container with ID starting with 7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f not found: ID does not exist" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.986460 4832 scope.go:117] "RemoveContainer" containerID="1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.986467 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.988665 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001"} err="failed to get container status \"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001\": rpc error: code = NotFound desc = could not find container \"1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001\": container with ID starting with 1f02fbd0422e1e0697eca326ff7160b443f0fa67aa16f152a6c910bc5a494001 not found: ID does not exist" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.988699 4832 scope.go:117] "RemoveContainer" containerID="7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.988868 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.990672 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f"} err="failed to get container status \"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f\": rpc error: code = NotFound desc = could not find container \"7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f\": container with ID starting with 7f41a4bbb8b92ac544aabfba7d16a8b0d67e6232999992cfcc35c14d8e76425f not found: ID does not exist" Jan 31 05:03:37 crc kubenswrapper[4832]: I0131 05:03:37.997670 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.176249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 
05:03:38.176535 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.176613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.176647 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.177039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7skt\" (UniqueName: \"kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.279980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.280071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.280109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.280273 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7skt\" (UniqueName: \"kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.280396 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.280804 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.297586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " 
pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.297622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.300123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.319033 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7skt\" (UniqueName: \"kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt\") pod \"nova-metadata-0\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.322279 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.824865 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:38 crc kubenswrapper[4832]: I0131 05:03:38.894865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78fe71af-9855-460b-ac9a-dcc7524704c1","Type":"ContainerStarted","Data":"87eae19fa629485a07750c96b463e05542b3c583acca079dea32461fa2936a03"} Jan 31 05:03:39 crc kubenswrapper[4832]: I0131 05:03:39.879775 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca791e7-c6b1-40f5-a80c-5e71013e2eb8" path="/var/lib/kubelet/pods/7ca791e7-c6b1-40f5-a80c-5e71013e2eb8/volumes" Jan 31 05:03:39 crc kubenswrapper[4832]: I0131 05:03:39.911953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78fe71af-9855-460b-ac9a-dcc7524704c1","Type":"ContainerStarted","Data":"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1"} Jan 31 05:03:39 crc kubenswrapper[4832]: I0131 05:03:39.912042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78fe71af-9855-460b-ac9a-dcc7524704c1","Type":"ContainerStarted","Data":"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6"} Jan 31 05:03:39 crc kubenswrapper[4832]: I0131 05:03:39.942905 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9428815310000003 podStartE2EDuration="2.942881531s" podCreationTimestamp="2026-01-31 05:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:39.937789891 +0000 UTC m=+1228.886611626" watchObservedRunningTime="2026-01-31 05:03:39.942881531 +0000 UTC m=+1228.891703226" Jan 31 05:03:40 crc kubenswrapper[4832]: I0131 05:03:40.933053 4832 
generic.go:334] "Generic (PLEG): container finished" podID="d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" containerID="921950b98c373d71bed145fc150ae8ae81dd172c85b0b97c772f65f2f9659991" exitCode=0 Jan 31 05:03:40 crc kubenswrapper[4832]: I0131 05:03:40.933123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgfgj" event={"ID":"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5","Type":"ContainerDied","Data":"921950b98c373d71bed145fc150ae8ae81dd172c85b0b97c772f65f2f9659991"} Jan 31 05:03:40 crc kubenswrapper[4832]: I0131 05:03:40.937352 4832 generic.go:334] "Generic (PLEG): container finished" podID="1348b78a-eddf-4b18-b3ab-3aa70968678f" containerID="179615d10f1bc54b71d799e1536b72a837c5b802a71ab86f3c9bf1a071c9ecb2" exitCode=0 Jan 31 05:03:40 crc kubenswrapper[4832]: I0131 05:03:40.938727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bx446" event={"ID":"1348b78a-eddf-4b18-b3ab-3aa70968678f","Type":"ContainerDied","Data":"179615d10f1bc54b71d799e1536b72a837c5b802a71ab86f3c9bf1a071c9ecb2"} Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.467872 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.522203 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.522267 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.804053 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.806165 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:03:41 crc kubenswrapper[4832]: I0131 05:03:41.856809 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.014682 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.067943 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.143783 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.144093 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="dnsmasq-dns" containerID="cri-o://74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b" gracePeriod=10 Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.542921 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.574197 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.612198 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.612625 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699511 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data\") pod \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699600 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle\") pod \"1348b78a-eddf-4b18-b3ab-3aa70968678f\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699732 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts\") pod \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts\") pod \"1348b78a-eddf-4b18-b3ab-3aa70968678f\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699840 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpgjh\" (UniqueName: \"kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh\") pod \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbcrw\" (UniqueName: \"kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw\") pod \"1348b78a-eddf-4b18-b3ab-3aa70968678f\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.699987 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle\") pod \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\" (UID: \"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.700052 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data\") pod \"1348b78a-eddf-4b18-b3ab-3aa70968678f\" (UID: \"1348b78a-eddf-4b18-b3ab-3aa70968678f\") " Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.760114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts" (OuterVolumeSpecName: "scripts") pod "1348b78a-eddf-4b18-b3ab-3aa70968678f" (UID: "1348b78a-eddf-4b18-b3ab-3aa70968678f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.760231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts" (OuterVolumeSpecName: "scripts") pod "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" (UID: "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.760244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh" (OuterVolumeSpecName: "kube-api-access-rpgjh") pod "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" (UID: "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5"). InnerVolumeSpecName "kube-api-access-rpgjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.760381 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw" (OuterVolumeSpecName: "kube-api-access-dbcrw") pod "1348b78a-eddf-4b18-b3ab-3aa70968678f" (UID: "1348b78a-eddf-4b18-b3ab-3aa70968678f"). InnerVolumeSpecName "kube-api-access-dbcrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.772207 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data" (OuterVolumeSpecName: "config-data") pod "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" (UID: "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.776778 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1348b78a-eddf-4b18-b3ab-3aa70968678f" (UID: "1348b78a-eddf-4b18-b3ab-3aa70968678f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806846 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806885 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806896 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806904 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806912 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpgjh\" (UniqueName: \"kubernetes.io/projected/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-kube-api-access-rpgjh\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.806922 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbcrw\" (UniqueName: 
\"kubernetes.io/projected/1348b78a-eddf-4b18-b3ab-3aa70968678f-kube-api-access-dbcrw\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.822382 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" (UID: "d9897d32-fbf9-4266-ae7e-2c9ec76b65c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.826666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data" (OuterVolumeSpecName: "config-data") pod "1348b78a-eddf-4b18-b3ab-3aa70968678f" (UID: "1348b78a-eddf-4b18-b3ab-3aa70968678f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.842312 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.909035 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.909091 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1348b78a-eddf-4b18-b3ab-3aa70968678f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.963146 4832 generic.go:334] "Generic (PLEG): container finished" podID="8e019ace-3599-4661-8577-79ecf77e6011" containerID="74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b" exitCode=0 Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.963528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" event={"ID":"8e019ace-3599-4661-8577-79ecf77e6011","Type":"ContainerDied","Data":"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b"} Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.963737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" event={"ID":"8e019ace-3599-4661-8577-79ecf77e6011","Type":"ContainerDied","Data":"c8489cd81e77a42293fa97ed8bcf541a39402dec56f5b350f35755b32102027a"} Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.963863 4832 scope.go:117] "RemoveContainer" containerID="74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.963912 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-b2hqg" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.969943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bx446" event={"ID":"1348b78a-eddf-4b18-b3ab-3aa70968678f","Type":"ContainerDied","Data":"4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d"} Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.970101 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b03e6a2001e94380b7ec2c7e79cc80dad9101b41cd87948f49ebf03009c313d" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.970272 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bx446" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.982798 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pgfgj" Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.986699 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pgfgj" event={"ID":"d9897d32-fbf9-4266-ae7e-2c9ec76b65c5","Type":"ContainerDied","Data":"57c9c90c23fab8d15bf1e848066ce22b18d626d08bc4350313d8a52e169c2b3f"} Jan 31 05:03:42 crc kubenswrapper[4832]: I0131 05:03:42.986735 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c9c90c23fab8d15bf1e848066ce22b18d626d08bc4350313d8a52e169c2b3f" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:42.997508 4832 scope.go:117] "RemoveContainer" containerID="ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.009892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb\") pod 
\"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.010042 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0\") pod \"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.010172 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb\") pod \"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.010246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config\") pod \"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.010303 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc\") pod \"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.010320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24k6\" (UniqueName: \"kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6\") pod \"8e019ace-3599-4661-8577-79ecf77e6011\" (UID: \"8e019ace-3599-4661-8577-79ecf77e6011\") " Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.041346 4832 scope.go:117] "RemoveContainer" 
containerID="74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.042529 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6" (OuterVolumeSpecName: "kube-api-access-x24k6") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "kube-api-access-x24k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.043311 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b\": container with ID starting with 74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b not found: ID does not exist" containerID="74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.043368 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b"} err="failed to get container status \"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b\": rpc error: code = NotFound desc = could not find container \"74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b\": container with ID starting with 74236e7651c1bfd64f131e69e7ec9c066ebaa8561368d8a2ee83ba2861c7be7b not found: ID does not exist" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.043404 4832 scope.go:117] "RemoveContainer" containerID="ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede" Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.044101 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede\": container with ID starting with ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede not found: ID does not exist" containerID="ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.044130 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede"} err="failed to get container status \"ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede\": rpc error: code = NotFound desc = could not find container \"ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede\": container with ID starting with ed864b01bb51c54f506f910b455165a7f517f157e9bccb96f729b2354e1e9ede not found: ID does not exist" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.091942 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.101640 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.102187 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="init" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102207 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="init" Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.102233 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" containerName="nova-manage" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102242 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" containerName="nova-manage" Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.102266 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1348b78a-eddf-4b18-b3ab-3aa70968678f" containerName="nova-cell1-conductor-db-sync" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102276 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1348b78a-eddf-4b18-b3ab-3aa70968678f" containerName="nova-cell1-conductor-db-sync" Jan 31 05:03:43 crc kubenswrapper[4832]: E0131 05:03:43.102292 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="dnsmasq-dns" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102299 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="dnsmasq-dns" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102475 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" containerName="nova-manage" Jan 31 
05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102497 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e019ace-3599-4661-8577-79ecf77e6011" containerName="dnsmasq-dns" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.102506 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1348b78a-eddf-4b18-b3ab-3aa70968678f" containerName="nova-cell1-conductor-db-sync" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.103309 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.111116 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.112490 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.113267 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.113306 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24k6\" (UniqueName: \"kubernetes.io/projected/8e019ace-3599-4661-8577-79ecf77e6011-kube-api-access-x24k6\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.113317 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.130861 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.141065 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.148185 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.161600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config" (OuterVolumeSpecName: "config") pod "8e019ace-3599-4661-8577-79ecf77e6011" (UID: "8e019ace-3599-4661-8577-79ecf77e6011"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.216421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mh2s\" (UniqueName: \"kubernetes.io/projected/d6ec6693-e464-4258-a3fa-fef2b9c97bae-kube-api-access-5mh2s\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.216885 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.217008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.217191 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.217265 4832 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.217327 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e019ace-3599-4661-8577-79ecf77e6011-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.222158 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.222496 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-log" containerID="cri-o://08d9e1da07c7fe1522a870de3852911209907f999af27db9911fe69ab7fdb037" gracePeriod=30 Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.222721 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-api" containerID="cri-o://9c220251c55b8e3f7666265ed0a32ba018a88be0f74787678be470474106d09b" gracePeriod=30 Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.237985 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.267817 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.268143 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-log" containerID="cri-o://bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6" gracePeriod=30 Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 
05:03:43.268223 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-metadata" containerID="cri-o://d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1" gracePeriod=30 Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.318881 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mh2s\" (UniqueName: \"kubernetes.io/projected/d6ec6693-e464-4258-a3fa-fef2b9c97bae-kube-api-access-5mh2s\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.319058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.319101 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.322347 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.322452 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.324403 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.324977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6ec6693-e464-4258-a3fa-fef2b9c97bae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.342775 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mh2s\" (UniqueName: \"kubernetes.io/projected/d6ec6693-e464-4258-a3fa-fef2b9c97bae-kube-api-access-5mh2s\") pod \"nova-cell1-conductor-0\" (UID: \"d6ec6693-e464-4258-a3fa-fef2b9c97bae\") " pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.369870 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.383025 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-b2hqg"] Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.434468 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:43 crc kubenswrapper[4832]: I0131 05:03:43.875832 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e019ace-3599-4661-8577-79ecf77e6011" path="/var/lib/kubelet/pods/8e019ace-3599-4661-8577-79ecf77e6011/volumes" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.031898 4832 generic.go:334] "Generic (PLEG): container finished" podID="2873498f-86d9-4ce6-ba69-bb2585471123" containerID="08d9e1da07c7fe1522a870de3852911209907f999af27db9911fe69ab7fdb037" exitCode=143 Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.031971 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerDied","Data":"08d9e1da07c7fe1522a870de3852911209907f999af27db9911fe69ab7fdb037"} Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.033010 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.034942 4832 generic.go:334] "Generic (PLEG): container finished" podID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerID="d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1" exitCode=0 Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.034965 4832 generic.go:334] "Generic (PLEG): container finished" podID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerID="bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6" exitCode=143 Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.035814 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.035961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78fe71af-9855-460b-ac9a-dcc7524704c1","Type":"ContainerDied","Data":"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1"} Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.035983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78fe71af-9855-460b-ac9a-dcc7524704c1","Type":"ContainerDied","Data":"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6"} Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.036001 4832 scope.go:117] "RemoveContainer" containerID="d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.065373 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 05:03:44 crc kubenswrapper[4832]: W0131 05:03:44.090547 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6ec6693_e464_4258_a3fa_fef2b9c97bae.slice/crio-8c828ce35b0b5b3a88f78d6625132c9e107978c3ee34fe33c7f1c5cc69dc03b7 WatchSource:0}: Error finding container 8c828ce35b0b5b3a88f78d6625132c9e107978c3ee34fe33c7f1c5cc69dc03b7: Status 404 returned error can't find the container with id 8c828ce35b0b5b3a88f78d6625132c9e107978c3ee34fe33c7f1c5cc69dc03b7 Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.093538 4832 scope.go:117] "RemoveContainer" containerID="bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.130909 4832 scope.go:117] "RemoveContainer" containerID="d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1" Jan 31 05:03:44 crc kubenswrapper[4832]: E0131 05:03:44.131896 4832 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1\": container with ID starting with d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1 not found: ID does not exist" containerID="d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.131928 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1"} err="failed to get container status \"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1\": rpc error: code = NotFound desc = could not find container \"d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1\": container with ID starting with d9f5bd38b352b896a0d1dc94bbe071055a23297f41d8731466a739fb6699cdd1 not found: ID does not exist" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.131952 4832 scope.go:117] "RemoveContainer" containerID="bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6" Jan 31 05:03:44 crc kubenswrapper[4832]: E0131 05:03:44.132178 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6\": container with ID starting with bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6 not found: ID does not exist" containerID="bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.132206 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6"} err="failed to get container status \"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6\": rpc error: code = NotFound desc = could not find container 
\"bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6\": container with ID starting with bbee4edd05ca5486a12bca6d397b72419cea58b2490378a559946cdf7145a6e6 not found: ID does not exist" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.146355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7skt\" (UniqueName: \"kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt\") pod \"78fe71af-9855-460b-ac9a-dcc7524704c1\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.146779 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs\") pod \"78fe71af-9855-460b-ac9a-dcc7524704c1\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.146832 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs\") pod \"78fe71af-9855-460b-ac9a-dcc7524704c1\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.146924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data\") pod \"78fe71af-9855-460b-ac9a-dcc7524704c1\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.146971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle\") pod \"78fe71af-9855-460b-ac9a-dcc7524704c1\" (UID: \"78fe71af-9855-460b-ac9a-dcc7524704c1\") " Jan 31 05:03:44 crc 
kubenswrapper[4832]: I0131 05:03:44.150629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs" (OuterVolumeSpecName: "logs") pod "78fe71af-9855-460b-ac9a-dcc7524704c1" (UID: "78fe71af-9855-460b-ac9a-dcc7524704c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.152403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt" (OuterVolumeSpecName: "kube-api-access-n7skt") pod "78fe71af-9855-460b-ac9a-dcc7524704c1" (UID: "78fe71af-9855-460b-ac9a-dcc7524704c1"). InnerVolumeSpecName "kube-api-access-n7skt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.185020 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78fe71af-9855-460b-ac9a-dcc7524704c1" (UID: "78fe71af-9855-460b-ac9a-dcc7524704c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.188502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data" (OuterVolumeSpecName: "config-data") pod "78fe71af-9855-460b-ac9a-dcc7524704c1" (UID: "78fe71af-9855-460b-ac9a-dcc7524704c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.233484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "78fe71af-9855-460b-ac9a-dcc7524704c1" (UID: "78fe71af-9855-460b-ac9a-dcc7524704c1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.249591 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fe71af-9855-460b-ac9a-dcc7524704c1-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.249626 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.249644 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.249658 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fe71af-9855-460b-ac9a-dcc7524704c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.249668 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7skt\" (UniqueName: \"kubernetes.io/projected/78fe71af-9855-460b-ac9a-dcc7524704c1-kube-api-access-n7skt\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.374988 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.421930 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.437812 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:44 crc kubenswrapper[4832]: E0131 05:03:44.438487 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-metadata" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.438569 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-metadata" Jan 31 05:03:44 crc kubenswrapper[4832]: E0131 05:03:44.438631 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-log" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.438682 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-log" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.438983 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-metadata" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.439055 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" containerName="nova-metadata-log" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.440329 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.446313 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.446680 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.461282 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.560068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtfk\" (UniqueName: \"kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.560183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.560290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.560345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data\") pod \"nova-metadata-0\" 
(UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.560455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.662765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.662943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.663012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.663161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.663249 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtfk\" (UniqueName: \"kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.663521 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.677541 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.677610 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.678355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data\") pod \"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.702053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtfk\" (UniqueName: \"kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk\") pod 
\"nova-metadata-0\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " pod="openstack/nova-metadata-0" Jan 31 05:03:44 crc kubenswrapper[4832]: I0131 05:03:44.771660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.065036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6ec6693-e464-4258-a3fa-fef2b9c97bae","Type":"ContainerStarted","Data":"f810aeffce40b4fe606b66de41f6ce8333d6ceec436e58151b96e176fd00329e"} Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.065099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6ec6693-e464-4258-a3fa-fef2b9c97bae","Type":"ContainerStarted","Data":"8c828ce35b0b5b3a88f78d6625132c9e107978c3ee34fe33c7f1c5cc69dc03b7"} Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.065112 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerName="nova-scheduler-scheduler" containerID="cri-o://8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" gracePeriod=30 Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.065912 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.101410 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.101387196 podStartE2EDuration="2.101387196s" podCreationTimestamp="2026-01-31 05:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:45.088874737 +0000 UTC m=+1234.037696422" watchObservedRunningTime="2026-01-31 05:03:45.101387196 +0000 UTC m=+1234.050208871" Jan 31 
05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.292012 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:03:45 crc kubenswrapper[4832]: I0131 05:03:45.877987 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fe71af-9855-460b-ac9a-dcc7524704c1" path="/var/lib/kubelet/pods/78fe71af-9855-460b-ac9a-dcc7524704c1/volumes" Jan 31 05:03:46 crc kubenswrapper[4832]: I0131 05:03:46.076301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerStarted","Data":"bdcf5b3e67cc08a88be96a740cd87029e3d9ef7e02f9ae8f19161679b1ac6374"} Jan 31 05:03:46 crc kubenswrapper[4832]: I0131 05:03:46.076381 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerStarted","Data":"d35fa820c2d7ff7316dd790325fc573ff5f45193a6b85ae0a592c072f1f188d4"} Jan 31 05:03:46 crc kubenswrapper[4832]: I0131 05:03:46.076400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerStarted","Data":"d0e21c5e1ff5262d2b99a38204019773fab2d5de4fed477ce49ea36f15df3609"} Jan 31 05:03:46 crc kubenswrapper[4832]: I0131 05:03:46.096177 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.096149521 podStartE2EDuration="2.096149521s" podCreationTimestamp="2026-01-31 05:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:46.094480399 +0000 UTC m=+1235.043302114" watchObservedRunningTime="2026-01-31 05:03:46.096149521 +0000 UTC m=+1235.044971206" Jan 31 05:03:46 crc kubenswrapper[4832]: E0131 05:03:46.807203 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:03:46 crc kubenswrapper[4832]: E0131 05:03:46.810370 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:03:46 crc kubenswrapper[4832]: E0131 05:03:46.812582 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:03:46 crc kubenswrapper[4832]: E0131 05:03:46.812625 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerName="nova-scheduler-scheduler" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.140100 4832 generic.go:334] "Generic (PLEG): container finished" podID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerID="8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" exitCode=0 Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.140711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0ab45d0f-8c40-4f39-b885-0579260c0763","Type":"ContainerDied","Data":"8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396"} Jan 31 05:03:48 crc 
kubenswrapper[4832]: I0131 05:03:48.377396 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.470927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data\") pod \"0ab45d0f-8c40-4f39-b885-0579260c0763\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.471064 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle\") pod \"0ab45d0f-8c40-4f39-b885-0579260c0763\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.471185 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw26s\" (UniqueName: \"kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s\") pod \"0ab45d0f-8c40-4f39-b885-0579260c0763\" (UID: \"0ab45d0f-8c40-4f39-b885-0579260c0763\") " Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.482157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s" (OuterVolumeSpecName: "kube-api-access-mw26s") pod "0ab45d0f-8c40-4f39-b885-0579260c0763" (UID: "0ab45d0f-8c40-4f39-b885-0579260c0763"). InnerVolumeSpecName "kube-api-access-mw26s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.518312 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab45d0f-8c40-4f39-b885-0579260c0763" (UID: "0ab45d0f-8c40-4f39-b885-0579260c0763"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.536234 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data" (OuterVolumeSpecName: "config-data") pod "0ab45d0f-8c40-4f39-b885-0579260c0763" (UID: "0ab45d0f-8c40-4f39-b885-0579260c0763"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.573838 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.573889 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw26s\" (UniqueName: \"kubernetes.io/projected/0ab45d0f-8c40-4f39-b885-0579260c0763-kube-api-access-mw26s\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:48 crc kubenswrapper[4832]: I0131 05:03:48.573912 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab45d0f-8c40-4f39-b885-0579260c0763-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.156906 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"0ab45d0f-8c40-4f39-b885-0579260c0763","Type":"ContainerDied","Data":"238fdaac1d09a6654602498c1425aca83924b161c74155e234238d8ae38e41e3"} Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.156992 4832 scope.go:117] "RemoveContainer" containerID="8013601b4c6ba6508d5f07c21ab0442c4b1802eec7e30db1caa825d6100ee396" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.156918 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.159450 4832 generic.go:334] "Generic (PLEG): container finished" podID="2873498f-86d9-4ce6-ba69-bb2585471123" containerID="9c220251c55b8e3f7666265ed0a32ba018a88be0f74787678be470474106d09b" exitCode=0 Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.159498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerDied","Data":"9c220251c55b8e3f7666265ed0a32ba018a88be0f74787678be470474106d09b"} Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.325768 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.337418 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.355393 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.374706 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:49 crc kubenswrapper[4832]: E0131 05:03:49.375525 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-log" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.375583 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-log" Jan 31 05:03:49 crc kubenswrapper[4832]: E0131 05:03:49.375638 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerName="nova-scheduler-scheduler" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.375651 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerName="nova-scheduler-scheduler" Jan 31 05:03:49 crc kubenswrapper[4832]: E0131 05:03:49.375674 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-api" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.375687 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-api" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.376070 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-api" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.376126 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" containerName="nova-scheduler-scheduler" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.376222 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" containerName="nova-api-log" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.377483 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.380892 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.394107 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.496507 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8tx\" (UniqueName: \"kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx\") pod \"2873498f-86d9-4ce6-ba69-bb2585471123\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.496670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle\") pod \"2873498f-86d9-4ce6-ba69-bb2585471123\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.496714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs\") pod \"2873498f-86d9-4ce6-ba69-bb2585471123\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.496920 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data\") pod \"2873498f-86d9-4ce6-ba69-bb2585471123\" (UID: \"2873498f-86d9-4ce6-ba69-bb2585471123\") " Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.497259 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.497360 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs" (OuterVolumeSpecName: "logs") pod "2873498f-86d9-4ce6-ba69-bb2585471123" (UID: "2873498f-86d9-4ce6-ba69-bb2585471123"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.497384 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.497475 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952vm\" (UniqueName: \"kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.497860 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2873498f-86d9-4ce6-ba69-bb2585471123-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.502747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx" (OuterVolumeSpecName: "kube-api-access-xk8tx") pod "2873498f-86d9-4ce6-ba69-bb2585471123" (UID: "2873498f-86d9-4ce6-ba69-bb2585471123"). InnerVolumeSpecName "kube-api-access-xk8tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.525755 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data" (OuterVolumeSpecName: "config-data") pod "2873498f-86d9-4ce6-ba69-bb2585471123" (UID: "2873498f-86d9-4ce6-ba69-bb2585471123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.532619 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2873498f-86d9-4ce6-ba69-bb2585471123" (UID: "2873498f-86d9-4ce6-ba69-bb2585471123"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600005 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600065 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952vm\" (UniqueName: \"kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600250 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600263 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873498f-86d9-4ce6-ba69-bb2585471123-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.600272 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8tx\" (UniqueName: \"kubernetes.io/projected/2873498f-86d9-4ce6-ba69-bb2585471123-kube-api-access-xk8tx\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:49 crc 
kubenswrapper[4832]: I0131 05:03:49.604288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.605654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.623184 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-952vm\" (UniqueName: \"kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm\") pod \"nova-scheduler-0\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.703845 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.772633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.772929 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:03:49 crc kubenswrapper[4832]: I0131 05:03:49.878832 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab45d0f-8c40-4f39-b885-0579260c0763" path="/var/lib/kubelet/pods/0ab45d0f-8c40-4f39-b885-0579260c0763/volumes" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.031419 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.174634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2873498f-86d9-4ce6-ba69-bb2585471123","Type":"ContainerDied","Data":"b46a8cd67f0028f9ab126fa5a404b9324adf8d018c1dc893221de7cdea179f64"} Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.174719 4832 scope.go:117] "RemoveContainer" containerID="9c220251c55b8e3f7666265ed0a32ba018a88be0f74787678be470474106d09b" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.174657 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.210204 4832 scope.go:117] "RemoveContainer" containerID="08d9e1da07c7fe1522a870de3852911209907f999af27db9911fe69ab7fdb037" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.221211 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.233487 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.243666 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.245689 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.248372 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.252426 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.269676 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:03:50 crc kubenswrapper[4832]: W0131 05:03:50.277422 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a32fb3_95af_439e_a95f_f35476a2cc15.slice/crio-128da660b9523c8f173d597630684d00850a0b084557cc5ade84097c7d43cfe1 WatchSource:0}: Error finding container 128da660b9523c8f173d597630684d00850a0b084557cc5ade84097c7d43cfe1: Status 404 returned error can't find the container with id 128da660b9523c8f173d597630684d00850a0b084557cc5ade84097c7d43cfe1 Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.418479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.419079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sck9c\" (UniqueName: \"kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.419156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.419220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.521368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sck9c\" (UniqueName: \"kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.521427 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.521455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.521522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.521962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.529405 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.530028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.544962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sck9c\" (UniqueName: 
\"kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c\") pod \"nova-api-0\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " pod="openstack/nova-api-0" Jan 31 05:03:50 crc kubenswrapper[4832]: I0131 05:03:50.573627 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:03:51 crc kubenswrapper[4832]: W0131 05:03:51.118875 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6da850fe_027d_4ad5_b024_f3f81dd1aa20.slice/crio-7f41e16ad5ad690c7e27be8b37dc8e3bf0768f821d5d8e690a6bdd23866a9e95 WatchSource:0}: Error finding container 7f41e16ad5ad690c7e27be8b37dc8e3bf0768f821d5d8e690a6bdd23866a9e95: Status 404 returned error can't find the container with id 7f41e16ad5ad690c7e27be8b37dc8e3bf0768f821d5d8e690a6bdd23866a9e95 Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 05:03:51.121712 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 05:03:51.194718 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerStarted","Data":"7f41e16ad5ad690c7e27be8b37dc8e3bf0768f821d5d8e690a6bdd23866a9e95"} Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 05:03:51.197995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81a32fb3-95af-439e-a95f-f35476a2cc15","Type":"ContainerStarted","Data":"91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4"} Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 05:03:51.198093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81a32fb3-95af-439e-a95f-f35476a2cc15","Type":"ContainerStarted","Data":"128da660b9523c8f173d597630684d00850a0b084557cc5ade84097c7d43cfe1"} Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 
05:03:51.222031 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.222005973 podStartE2EDuration="2.222005973s" podCreationTimestamp="2026-01-31 05:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:51.214905672 +0000 UTC m=+1240.163727367" watchObservedRunningTime="2026-01-31 05:03:51.222005973 +0000 UTC m=+1240.170827668" Jan 31 05:03:51 crc kubenswrapper[4832]: I0131 05:03:51.878625 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2873498f-86d9-4ce6-ba69-bb2585471123" path="/var/lib/kubelet/pods/2873498f-86d9-4ce6-ba69-bb2585471123/volumes" Jan 31 05:03:52 crc kubenswrapper[4832]: I0131 05:03:52.210359 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerStarted","Data":"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a"} Jan 31 05:03:52 crc kubenswrapper[4832]: I0131 05:03:52.211070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerStarted","Data":"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b"} Jan 31 05:03:52 crc kubenswrapper[4832]: I0131 05:03:52.240270 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.240248748 podStartE2EDuration="2.240248748s" podCreationTimestamp="2026-01-31 05:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:03:52.237794501 +0000 UTC m=+1241.186616226" watchObservedRunningTime="2026-01-31 05:03:52.240248748 +0000 UTC m=+1241.189070433" Jan 31 05:03:53 crc kubenswrapper[4832]: I0131 05:03:53.488094 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.241541 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.241930 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" containerName="kube-state-metrics" containerID="cri-o://67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874" gracePeriod=30 Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.706366 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.772197 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.772274 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.845900 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.931534 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgk9\" (UniqueName: \"kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9\") pod \"ce8cc6a9-9cd5-410b-93df-1f132291a0ea\" (UID: \"ce8cc6a9-9cd5-410b-93df-1f132291a0ea\") " Jan 31 05:03:54 crc kubenswrapper[4832]: I0131 05:03:54.939298 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9" (OuterVolumeSpecName: "kube-api-access-hbgk9") pod "ce8cc6a9-9cd5-410b-93df-1f132291a0ea" (UID: "ce8cc6a9-9cd5-410b-93df-1f132291a0ea"). InnerVolumeSpecName "kube-api-access-hbgk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.036188 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgk9\" (UniqueName: \"kubernetes.io/projected/ce8cc6a9-9cd5-410b-93df-1f132291a0ea-kube-api-access-hbgk9\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.248413 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" containerID="67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874" exitCode=2 Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.248475 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ce8cc6a9-9cd5-410b-93df-1f132291a0ea","Type":"ContainerDied","Data":"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874"} Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.248518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ce8cc6a9-9cd5-410b-93df-1f132291a0ea","Type":"ContainerDied","Data":"703e5844cfd047cb33b6877290656c5a77f5e6b2fb99f5b1a799ce6152289964"} Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.248543 4832 scope.go:117] "RemoveContainer" containerID="67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.248737 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.292250 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.307588 4832 scope.go:117] "RemoveContainer" containerID="67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874" Jan 31 05:03:55 crc kubenswrapper[4832]: E0131 05:03:55.308826 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874\": container with ID starting with 67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874 not found: ID does not exist" containerID="67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.309023 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874"} err="failed to get container status \"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874\": rpc error: code = NotFound desc = could not find container \"67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874\": container with ID starting with 67dee97a6b47d42e297154f13cf5cebf76f0789befce30e0be6ccd99eab46874 not found: ID does not exist" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.309839 4832 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.323345 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:55 crc kubenswrapper[4832]: E0131 05:03:55.324064 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" containerName="kube-state-metrics" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.324096 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" containerName="kube-state-metrics" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.324380 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" containerName="kube-state-metrics" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.325458 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.327268 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.327686 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.339267 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.446910 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.446969 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.447057 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.447103 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tr6s\" (UniqueName: \"kubernetes.io/projected/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-api-access-9tr6s\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.549088 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.549208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.549396 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.549457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tr6s\" (UniqueName: \"kubernetes.io/projected/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-api-access-9tr6s\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.557085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.558352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.586940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca479b9-47c2-4c05-9b4c-dbde78e18be7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.589552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9tr6s\" (UniqueName: \"kubernetes.io/projected/bca479b9-47c2-4c05-9b4c-dbde78e18be7-kube-api-access-9tr6s\") pod \"kube-state-metrics-0\" (UID: \"bca479b9-47c2-4c05-9b4c-dbde78e18be7\") " pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.654200 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.796777 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.796880 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:03:55 crc kubenswrapper[4832]: I0131 05:03:55.871598 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8cc6a9-9cd5-410b-93df-1f132291a0ea" path="/var/lib/kubelet/pods/ce8cc6a9-9cd5-410b-93df-1f132291a0ea/volumes" Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.156137 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.156527 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-central-agent" containerID="cri-o://a2a7d673f85afd0cd5a894d6e7c5f70467f87f3b6b94d26016ff12c102341daa" gracePeriod=30 Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.156682 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="proxy-httpd" containerID="cri-o://aa24381ecb1d33f2ca7298eb215fef61e0f307b8adafac52e41c3d684c074b6f" gracePeriod=30 Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.156746 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-notification-agent" containerID="cri-o://abfeb396abe6feacb57b252ef44080f2f739993b4c1b9ecc72f5a90ad6a56477" gracePeriod=30 Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.156734 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="sg-core" containerID="cri-o://93080b0a65d16d8d75e60a94c07fb0b0b88535cddd72bb85dbe06cf45121b08f" gracePeriod=30 Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.198780 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 05:03:56 crc kubenswrapper[4832]: I0131 05:03:56.292670 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bca479b9-47c2-4c05-9b4c-dbde78e18be7","Type":"ContainerStarted","Data":"d6a021d729ac6c63cadd1639d39b2fe3f8969130b287d39c68b359de1c73571f"} Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.301516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bca479b9-47c2-4c05-9b4c-dbde78e18be7","Type":"ContainerStarted","Data":"c5386f45e7103eccd8ee9384bb8afa80d5b37bd9bc0f9e79ddcf2d2cde194164"} Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.303345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306629 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerID="aa24381ecb1d33f2ca7298eb215fef61e0f307b8adafac52e41c3d684c074b6f" exitCode=0 Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306652 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerID="93080b0a65d16d8d75e60a94c07fb0b0b88535cddd72bb85dbe06cf45121b08f" exitCode=2 Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306660 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerID="a2a7d673f85afd0cd5a894d6e7c5f70467f87f3b6b94d26016ff12c102341daa" exitCode=0 Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerDied","Data":"aa24381ecb1d33f2ca7298eb215fef61e0f307b8adafac52e41c3d684c074b6f"} Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerDied","Data":"93080b0a65d16d8d75e60a94c07fb0b0b88535cddd72bb85dbe06cf45121b08f"} Jan 31 05:03:57 crc kubenswrapper[4832]: I0131 05:03:57.306705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerDied","Data":"a2a7d673f85afd0cd5a894d6e7c5f70467f87f3b6b94d26016ff12c102341daa"} Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.342969 4832 generic.go:334] "Generic (PLEG): container finished" podID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerID="abfeb396abe6feacb57b252ef44080f2f739993b4c1b9ecc72f5a90ad6a56477" exitCode=0 Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.343051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerDied","Data":"abfeb396abe6feacb57b252ef44080f2f739993b4c1b9ecc72f5a90ad6a56477"} Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.727694 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.760237 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.37363142 podStartE2EDuration="3.760213422s" podCreationTimestamp="2026-01-31 05:03:55 +0000 UTC" firstStartedPulling="2026-01-31 05:03:56.25094476 +0000 UTC m=+1245.199766445" lastFinishedPulling="2026-01-31 05:03:56.637526762 +0000 UTC m=+1245.586348447" observedRunningTime="2026-01-31 05:03:57.340649257 +0000 UTC m=+1246.289470942" watchObservedRunningTime="2026-01-31 05:03:58.760213422 +0000 UTC m=+1247.709035107" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.853856 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxql\" (UniqueName: \"kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854140 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 
31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854184 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854389 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts\") pod \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\" (UID: \"a1631bf1-d29f-41d9-a70b-4441863ac0fb\") " Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.854778 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.855508 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.856541 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.856599 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1631bf1-d29f-41d9-a70b-4441863ac0fb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.862872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts" (OuterVolumeSpecName: "scripts") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.864061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql" (OuterVolumeSpecName: "kube-api-access-2qxql") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "kube-api-access-2qxql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.890790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.951721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.958512 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxql\" (UniqueName: \"kubernetes.io/projected/a1631bf1-d29f-41d9-a70b-4441863ac0fb-kube-api-access-2qxql\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.958551 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.958580 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.958592 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-scripts\") on 
node \"crc\" DevicePath \"\"" Jan 31 05:03:58 crc kubenswrapper[4832]: I0131 05:03:58.986316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data" (OuterVolumeSpecName: "config-data") pod "a1631bf1-d29f-41d9-a70b-4441863ac0fb" (UID: "a1631bf1-d29f-41d9-a70b-4441863ac0fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.060532 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1631bf1-d29f-41d9-a70b-4441863ac0fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.355810 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.358802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1631bf1-d29f-41d9-a70b-4441863ac0fb","Type":"ContainerDied","Data":"567b80825d4d266702c145ee417379c51b86be791001fb2dd48d13ed6955cf8f"} Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.358869 4832 scope.go:117] "RemoveContainer" containerID="aa24381ecb1d33f2ca7298eb215fef61e0f307b8adafac52e41c3d684c074b6f" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.397697 4832 scope.go:117] "RemoveContainer" containerID="93080b0a65d16d8d75e60a94c07fb0b0b88535cddd72bb85dbe06cf45121b08f" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.408442 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.422813 4832 scope.go:117] "RemoveContainer" containerID="abfeb396abe6feacb57b252ef44080f2f739993b4c1b9ecc72f5a90ad6a56477" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.429321 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.444444 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:59 crc kubenswrapper[4832]: E0131 05:03:59.445141 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="sg-core" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445169 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="sg-core" Jan 31 05:03:59 crc kubenswrapper[4832]: E0131 05:03:59.445213 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="proxy-httpd" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445224 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="proxy-httpd" Jan 31 05:03:59 crc kubenswrapper[4832]: E0131 05:03:59.445256 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-notification-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445266 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-notification-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: E0131 05:03:59.445277 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-central-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445285 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-central-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445549 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="sg-core" Jan 31 05:03:59 
crc kubenswrapper[4832]: I0131 05:03:59.445597 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-central-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445624 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="proxy-httpd" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.445636 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" containerName="ceilometer-notification-agent" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.448239 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.451619 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.451857 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.452007 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.455106 4832 scope.go:117] "RemoveContainer" containerID="a2a7d673f85afd0cd5a894d6e7c5f70467f87f3b6b94d26016ff12c102341daa" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.457382 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 
31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fv79b\" (UniqueName: \"kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.572737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.674840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv79b\" (UniqueName: \"kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.674916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.674990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.675524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " 
pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.675650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.676083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.676133 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.676173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.676216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.676447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.681463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.682262 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.682765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.683156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.684888 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.701651 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv79b\" (UniqueName: \"kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b\") pod \"ceilometer-0\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.704892 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.760299 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.773081 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:03:59 crc kubenswrapper[4832]: I0131 05:03:59.883093 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1631bf1-d29f-41d9-a70b-4441863ac0fb" path="/var/lib/kubelet/pods/a1631bf1-d29f-41d9-a70b-4441863ac0fb/volumes" Jan 31 05:04:00 crc kubenswrapper[4832]: I0131 05:04:00.377470 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:00 crc kubenswrapper[4832]: I0131 05:04:00.406835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 05:04:00 crc kubenswrapper[4832]: I0131 05:04:00.574491 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:04:00 crc kubenswrapper[4832]: I0131 05:04:00.574597 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:04:01 crc kubenswrapper[4832]: I0131 05:04:01.382365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerStarted","Data":"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2"} Jan 31 05:04:01 crc kubenswrapper[4832]: I0131 05:04:01.382808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerStarted","Data":"da5066f85795cc5941b0457f972e35070115deaf0d3dbe3ab9c308e30b5b766d"} Jan 31 05:04:01 crc kubenswrapper[4832]: I0131 05:04:01.656797 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:01 crc kubenswrapper[4832]: I0131 05:04:01.656844 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:02 crc kubenswrapper[4832]: I0131 05:04:02.410500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerStarted","Data":"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead"} Jan 31 05:04:03 crc kubenswrapper[4832]: I0131 05:04:03.424864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerStarted","Data":"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244"} Jan 31 05:04:04 crc kubenswrapper[4832]: I0131 05:04:04.782945 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:04:04 crc kubenswrapper[4832]: I0131 
05:04:04.783857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:04:04 crc kubenswrapper[4832]: I0131 05:04:04.796745 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:04:04 crc kubenswrapper[4832]: I0131 05:04:04.798500 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:04:05 crc kubenswrapper[4832]: I0131 05:04:05.458741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerStarted","Data":"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408"} Jan 31 05:04:05 crc kubenswrapper[4832]: I0131 05:04:05.497158 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.236446237 podStartE2EDuration="6.497133704s" podCreationTimestamp="2026-01-31 05:03:59 +0000 UTC" firstStartedPulling="2026-01-31 05:04:00.384843554 +0000 UTC m=+1249.333665229" lastFinishedPulling="2026-01-31 05:04:04.645531001 +0000 UTC m=+1253.594352696" observedRunningTime="2026-01-31 05:04:05.487635538 +0000 UTC m=+1254.436457293" watchObservedRunningTime="2026-01-31 05:04:05.497133704 +0000 UTC m=+1254.445955399" Jan 31 05:04:05 crc kubenswrapper[4832]: I0131 05:04:05.665324 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 05:04:06 crc kubenswrapper[4832]: I0131 05:04:06.472241 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.387006 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.488961 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" containerID="a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe" exitCode=137 Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.489073 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.489135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f4dfabe-ee65-4103-ab76-5beacb6e39d0","Type":"ContainerDied","Data":"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe"} Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.489182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2f4dfabe-ee65-4103-ab76-5beacb6e39d0","Type":"ContainerDied","Data":"a0569d878dd486c965a80dbe8667501bbfe8931d37374149f221eec4589b4d4c"} Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.489209 4832 scope.go:117] "RemoveContainer" containerID="a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.515846 4832 scope.go:117] "RemoveContainer" containerID="a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe" Jan 31 05:04:07 crc kubenswrapper[4832]: E0131 05:04:07.516548 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe\": container with ID starting with a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe not found: ID does not exist" containerID="a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 
05:04:07.516620 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe"} err="failed to get container status \"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe\": rpc error: code = NotFound desc = could not find container \"a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe\": container with ID starting with a33038ca5d354169a84433d12de625f5f17b9b18ad2997efe925ebd091734dbe not found: ID does not exist" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.587909 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data\") pod \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.588021 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle\") pod \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.588173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdks\" (UniqueName: \"kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks\") pod \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\" (UID: \"2f4dfabe-ee65-4103-ab76-5beacb6e39d0\") " Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.614184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks" (OuterVolumeSpecName: "kube-api-access-bvdks") pod "2f4dfabe-ee65-4103-ab76-5beacb6e39d0" (UID: "2f4dfabe-ee65-4103-ab76-5beacb6e39d0"). 
InnerVolumeSpecName "kube-api-access-bvdks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.629523 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f4dfabe-ee65-4103-ab76-5beacb6e39d0" (UID: "2f4dfabe-ee65-4103-ab76-5beacb6e39d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.642546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data" (OuterVolumeSpecName: "config-data") pod "2f4dfabe-ee65-4103-ab76-5beacb6e39d0" (UID: "2f4dfabe-ee65-4103-ab76-5beacb6e39d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.692111 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.692153 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.692171 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdks\" (UniqueName: \"kubernetes.io/projected/2f4dfabe-ee65-4103-ab76-5beacb6e39d0-kube-api-access-bvdks\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.837186 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 
05:04:07.858004 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.899893 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" path="/var/lib/kubelet/pods/2f4dfabe-ee65-4103-ab76-5beacb6e39d0/volumes" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.904412 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:04:07 crc kubenswrapper[4832]: E0131 05:04:07.905517 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.905606 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.909331 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4dfabe-ee65-4103-ab76-5beacb6e39d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.912335 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.925007 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.925681 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.925826 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 05:04:07 crc kubenswrapper[4832]: I0131 05:04:07.926679 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.106429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kzkh\" (UniqueName: \"kubernetes.io/projected/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-kube-api-access-6kzkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.106499 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.106530 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 
crc kubenswrapper[4832]: I0131 05:04:08.106576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.106643 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.208368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kzkh\" (UniqueName: \"kubernetes.io/projected/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-kube-api-access-6kzkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.208977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.209083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 
05:04:08.209160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.209285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.214463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.215065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.217785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.231828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.256803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kzkh\" (UniqueName: \"kubernetes.io/projected/e209d1b6-1bc1-4667-ad99-4b2cf348f2b7-kube-api-access-6kzkh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:08 crc kubenswrapper[4832]: I0131 05:04:08.543108 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:09 crc kubenswrapper[4832]: W0131 05:04:09.076735 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode209d1b6_1bc1_4667_ad99_4b2cf348f2b7.slice/crio-c36fd2883d695cfdb297348364114fd5c0d704209dba4ffb456634732e39a89f WatchSource:0}: Error finding container c36fd2883d695cfdb297348364114fd5c0d704209dba4ffb456634732e39a89f: Status 404 returned error can't find the container with id c36fd2883d695cfdb297348364114fd5c0d704209dba4ffb456634732e39a89f Jan 31 05:04:09 crc kubenswrapper[4832]: I0131 05:04:09.078513 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 05:04:09 crc kubenswrapper[4832]: I0131 05:04:09.547083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7","Type":"ContainerStarted","Data":"afa3c397574e60da81714ff788d978f83f3a025f254b889fa52314c46c68ff85"} Jan 31 05:04:09 crc kubenswrapper[4832]: I0131 05:04:09.547671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"e209d1b6-1bc1-4667-ad99-4b2cf348f2b7","Type":"ContainerStarted","Data":"c36fd2883d695cfdb297348364114fd5c0d704209dba4ffb456634732e39a89f"} Jan 31 05:04:09 crc kubenswrapper[4832]: I0131 05:04:09.572939 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.57288635 podStartE2EDuration="2.57288635s" podCreationTimestamp="2026-01-31 05:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:09.567267165 +0000 UTC m=+1258.516088850" watchObservedRunningTime="2026-01-31 05:04:09.57288635 +0000 UTC m=+1258.521708045" Jan 31 05:04:10 crc kubenswrapper[4832]: I0131 05:04:10.578971 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:04:10 crc kubenswrapper[4832]: I0131 05:04:10.579875 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:04:10 crc kubenswrapper[4832]: I0131 05:04:10.582258 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:04:10 crc kubenswrapper[4832]: I0131 05:04:10.583708 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:04:11 crc kubenswrapper[4832]: I0131 05:04:11.571439 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:04:11 crc kubenswrapper[4832]: I0131 05:04:11.585952 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:04:11 crc kubenswrapper[4832]: I0131 05:04:11.913461 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:04:11 crc kubenswrapper[4832]: I0131 05:04:11.935228 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:04:11 crc kubenswrapper[4832]: I0131 05:04:11.935441 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.044805 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.045163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-452cd\" (UniqueName: \"kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.045231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.045345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.045392 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.045426 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-452cd\" (UniqueName: \"kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147665 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.147756 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.148609 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.148747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.149093 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config\") pod 
\"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.149152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.149397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.180810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-452cd\" (UniqueName: \"kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd\") pod \"dnsmasq-dns-cd5cbd7b9-gdwbp\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.265264 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:12 crc kubenswrapper[4832]: I0131 05:04:12.754810 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:04:13 crc kubenswrapper[4832]: I0131 05:04:13.543928 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:13 crc kubenswrapper[4832]: I0131 05:04:13.595175 4832 generic.go:334] "Generic (PLEG): container finished" podID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerID="91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336" exitCode=0 Jan 31 05:04:13 crc kubenswrapper[4832]: I0131 05:04:13.596530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" event={"ID":"17da1585-a60a-4893-bd3d-2d76fd4ca5a1","Type":"ContainerDied","Data":"91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336"} Jan 31 05:04:13 crc kubenswrapper[4832]: I0131 05:04:13.596596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" event={"ID":"17da1585-a60a-4893-bd3d-2d76fd4ca5a1","Type":"ContainerStarted","Data":"c3c40353435f45f4d35d3df405bda093c22082082f6dab31981dd53700e4cc61"} Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.165049 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.165830 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-central-agent" containerID="cri-o://bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.166008 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="proxy-httpd" containerID="cri-o://8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.166065 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="sg-core" containerID="cri-o://a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.166106 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-notification-agent" containerID="cri-o://552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.181602 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.202:3000/\": EOF" Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.584477 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.633256 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" event={"ID":"17da1585-a60a-4893-bd3d-2d76fd4ca5a1","Type":"ContainerStarted","Data":"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159"} Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.634280 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638137 4832 generic.go:334] "Generic (PLEG): container finished" podID="bbf77216-c249-4ab3-8940-097819b60d51" 
containerID="8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408" exitCode=0 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638186 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerDied","Data":"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408"} Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerDied","Data":"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244"} Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638231 4832 generic.go:334] "Generic (PLEG): container finished" podID="bbf77216-c249-4ab3-8940-097819b60d51" containerID="a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244" exitCode=2 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638242 4832 generic.go:334] "Generic (PLEG): container finished" podID="bbf77216-c249-4ab3-8940-097819b60d51" containerID="bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2" exitCode=0 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638435 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-log" containerID="cri-o://0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerDied","Data":"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2"} Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.638590 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-api" containerID="cri-o://a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a" gracePeriod=30 Jan 31 05:04:14 crc kubenswrapper[4832]: I0131 05:04:14.655653 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" podStartSLOduration=3.655627979 podStartE2EDuration="3.655627979s" podCreationTimestamp="2026-01-31 05:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:14.654126013 +0000 UTC m=+1263.602947698" watchObservedRunningTime="2026-01-31 05:04:14.655627979 +0000 UTC m=+1263.604449664" Jan 31 05:04:15 crc kubenswrapper[4832]: I0131 05:04:15.651316 4832 generic.go:334] "Generic (PLEG): container finished" podID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerID="0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b" exitCode=143 Jan 31 05:04:15 crc kubenswrapper[4832]: I0131 05:04:15.651547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerDied","Data":"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b"} Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.435607 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.440184 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs\") pod \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522431 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv79b\" (UniqueName: \"kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522512 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522542 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522588 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle\") pod \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522624 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522712 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522776 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data\") pod \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522853 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sck9c\" (UniqueName: \"kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c\") pod \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\" (UID: \"6da850fe-027d-4ad5-b024-f3f81dd1aa20\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.522953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: 
I0131 05:04:18.522978 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd\") pod \"bbf77216-c249-4ab3-8940-097819b60d51\" (UID: \"bbf77216-c249-4ab3-8940-097819b60d51\") " Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.524130 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.524184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.524195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs" (OuterVolumeSpecName: "logs") pod "6da850fe-027d-4ad5-b024-f3f81dd1aa20" (UID: "6da850fe-027d-4ad5-b024-f3f81dd1aa20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.533839 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c" (OuterVolumeSpecName: "kube-api-access-sck9c") pod "6da850fe-027d-4ad5-b024-f3f81dd1aa20" (UID: "6da850fe-027d-4ad5-b024-f3f81dd1aa20"). InnerVolumeSpecName "kube-api-access-sck9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.537779 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts" (OuterVolumeSpecName: "scripts") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.554134 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.554139 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b" (OuterVolumeSpecName: "kube-api-access-fv79b") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "kube-api-access-fv79b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.554211 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.554657 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.567420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.584882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da850fe-027d-4ad5-b024-f3f81dd1aa20" (UID: "6da850fe-027d-4ad5-b024-f3f81dd1aa20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.592964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data" (OuterVolumeSpecName: "config-data") pod "6da850fe-027d-4ad5-b024-f3f81dd1aa20" (UID: "6da850fe-027d-4ad5-b024-f3f81dd1aa20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.596543 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626221 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sck9c\" (UniqueName: \"kubernetes.io/projected/6da850fe-027d-4ad5-b024-f3f81dd1aa20-kube-api-access-sck9c\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626267 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626281 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da850fe-027d-4ad5-b024-f3f81dd1aa20-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626294 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv79b\" (UniqueName: \"kubernetes.io/projected/bbf77216-c249-4ab3-8940-097819b60d51-kube-api-access-fv79b\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626306 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626316 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626326 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626337 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da850fe-027d-4ad5-b024-f3f81dd1aa20-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.626348 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbf77216-c249-4ab3-8940-097819b60d51-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.634075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.642056 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.691905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data" (OuterVolumeSpecName: "config-data") pod "bbf77216-c249-4ab3-8940-097819b60d51" (UID: "bbf77216-c249-4ab3-8940-097819b60d51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.697340 4832 generic.go:334] "Generic (PLEG): container finished" podID="bbf77216-c249-4ab3-8940-097819b60d51" containerID="552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead" exitCode=0 Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.697407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerDied","Data":"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead"} Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.697449 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.697502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbf77216-c249-4ab3-8940-097819b60d51","Type":"ContainerDied","Data":"da5066f85795cc5941b0457f972e35070115deaf0d3dbe3ab9c308e30b5b766d"} Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.697528 4832 scope.go:117] "RemoveContainer" containerID="8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.705460 4832 generic.go:334] "Generic (PLEG): container finished" podID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerID="a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a" exitCode=0 Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.706291 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.706969 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerDied","Data":"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a"} Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.707009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6da850fe-027d-4ad5-b024-f3f81dd1aa20","Type":"ContainerDied","Data":"7f41e16ad5ad690c7e27be8b37dc8e3bf0768f821d5d8e690a6bdd23866a9e95"} Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.725147 4832 scope.go:117] "RemoveContainer" containerID="a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.729190 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.729950 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.729983 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.729993 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbf77216-c249-4ab3-8940-097819b60d51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.732757 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 
05:04:18.742817 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.772971 4832 scope.go:117] "RemoveContainer" containerID="552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.773913 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.797054 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.820259 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.820886 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="sg-core" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.820906 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="sg-core" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.820928 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-api" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.820934 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-api" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.820958 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="proxy-httpd" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.820965 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="proxy-httpd" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.820994 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-log" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821001 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-log" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.821016 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-central-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821022 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-central-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.821034 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-notification-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821040 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-notification-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821372 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="sg-core" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821418 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-api" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821426 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-notification-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821437 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="ceilometer-central-agent" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 
05:04:18.821446 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf77216-c249-4ab3-8940-097819b60d51" containerName="proxy-httpd" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.821507 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" containerName="nova-api-log" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.827359 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.831645 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.831911 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.832045 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.835401 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.848612 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.862834 4832 scope.go:117] "RemoveContainer" containerID="bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.868113 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.872240 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.872323 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.872438 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.874506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.939982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xg6\" (UniqueName: \"kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.940039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.940264 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-run-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.940380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.940435 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-config-data\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.940507 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941423 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4fs\" (UniqueName: \"kubernetes.io/projected/9305639f-a8a1-4742-b3d9-fe416bcef2cd-kube-api-access-jz4fs\") 
pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941500 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941533 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-scripts\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-log-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941893 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.941956 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.960241 
4832 scope.go:117] "RemoveContainer" containerID="8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.962214 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408\": container with ID starting with 8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408 not found: ID does not exist" containerID="8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.962277 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408"} err="failed to get container status \"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408\": rpc error: code = NotFound desc = could not find container \"8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408\": container with ID starting with 8e8bcfb72e99df0924a19c67d317fae6c78f9303569abd4a1e1933ba7dc76408 not found: ID does not exist" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.962313 4832 scope.go:117] "RemoveContainer" containerID="a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.962981 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244\": container with ID starting with a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244 not found: ID does not exist" containerID="a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.963131 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244"} err="failed to get container status \"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244\": rpc error: code = NotFound desc = could not find container \"a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244\": container with ID starting with a0a1e756b3ecd6c0676cabfc895999fdf864a9e0e6a2a81a1f4a8ff54452c244 not found: ID does not exist" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.963154 4832 scope.go:117] "RemoveContainer" containerID="552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.970770 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead\": container with ID starting with 552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead not found: ID does not exist" containerID="552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.970857 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead"} err="failed to get container status \"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead\": rpc error: code = NotFound desc = could not find container \"552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead\": container with ID starting with 552a99d599a56642305fc9c34069eb693ba69291da2ec73fd36c2d2bf9506ead not found: ID does not exist" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.970901 4832 scope.go:117] "RemoveContainer" containerID="bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2" Jan 31 05:04:18 crc kubenswrapper[4832]: E0131 05:04:18.972059 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2\": container with ID starting with bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2 not found: ID does not exist" containerID="bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.972091 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2"} err="failed to get container status \"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2\": rpc error: code = NotFound desc = could not find container \"bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2\": container with ID starting with bde6b016ff752b98581167ef5ae21656d7db00a98cd9641e663251e9356725e2 not found: ID does not exist" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.972109 4832 scope.go:117] "RemoveContainer" containerID="a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a" Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.990136 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-698zs"] Jan 31 05:04:18 crc kubenswrapper[4832]: I0131 05:04:18.999672 4832 scope.go:117] "RemoveContainer" containerID="0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.004359 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.007734 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.008055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.008534 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-698zs"] Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.027125 4832 scope.go:117] "RemoveContainer" containerID="a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a" Jan 31 05:04:19 crc kubenswrapper[4832]: E0131 05:04:19.027773 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a\": container with ID starting with a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a not found: ID does not exist" containerID="a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.027841 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a"} err="failed to get container status \"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a\": rpc error: code = NotFound desc = could not find container \"a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a\": container with ID starting with a05c850c3c6dac0bd7ebb1b44140926de3ae838481fc7456a9d1f455c85be48a not found: ID does not exist" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.027878 4832 scope.go:117] "RemoveContainer" containerID="0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b" Jan 
31 05:04:19 crc kubenswrapper[4832]: E0131 05:04:19.028322 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b\": container with ID starting with 0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b not found: ID does not exist" containerID="0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.028376 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b"} err="failed to get container status \"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b\": rpc error: code = NotFound desc = could not find container \"0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b\": container with ID starting with 0cdac3f90ff49524962f63f894017cf54348b2b08d7a575968c0bafe41ad856b not found: ID does not exist" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044464 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xg6\" (UniqueName: \"kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-run-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044534 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcf8r\" (UniqueName: \"kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044571 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-config-data\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044616 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044732 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jz4fs\" (UniqueName: \"kubernetes.io/projected/9305639f-a8a1-4742-b3d9-fe416bcef2cd-kube-api-access-jz4fs\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-scripts\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-log-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.044832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.045227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs\") pod \"nova-api-0\" (UID: 
\"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.045886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-run-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.048823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9305639f-a8a1-4742-b3d9-fe416bcef2cd-log-httpd\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.051512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.051659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.051890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.052933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.053245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-config-data\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.053544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.053973 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-scripts\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.054442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9305639f-a8a1-4742-b3d9-fe416bcef2cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.060334 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.067052 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x9xg6\" (UniqueName: \"kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6\") pod \"nova-api-0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.067185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz4fs\" (UniqueName: \"kubernetes.io/projected/9305639f-a8a1-4742-b3d9-fe416bcef2cd-kube-api-access-jz4fs\") pod \"ceilometer-0\" (UID: \"9305639f-a8a1-4742-b3d9-fe416bcef2cd\") " pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.150236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcf8r\" (UniqueName: \"kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.150343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.150435 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.150497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.154933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.158629 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.158663 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.165060 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.173678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcf8r\" (UniqueName: \"kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r\") pod \"nova-cell1-cell-mapping-698zs\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.260348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.329335 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:19 crc kubenswrapper[4832]: W0131 05:04:19.746059 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9305639f_a8a1_4742_b3d9_fe416bcef2cd.slice/crio-28ccadb174b3bf6a87ff07332d6b4deb49689e8fda7d5388f55db7bc22279ee8 WatchSource:0}: Error finding container 28ccadb174b3bf6a87ff07332d6b4deb49689e8fda7d5388f55db7bc22279ee8: Status 404 returned error can't find the container with id 28ccadb174b3bf6a87ff07332d6b4deb49689e8fda7d5388f55db7bc22279ee8 Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.746683 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.838746 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-698zs"] Jan 31 05:04:19 crc kubenswrapper[4832]: W0131 05:04:19.845746 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cec60ba_fa92_4bdd_9c03_d535d7cd40c0.slice/crio-0bbc415472db2e3901b82dc842c0642fbc6d03ffe794d0326896885cb8219a93 WatchSource:0}: Error finding 
container 0bbc415472db2e3901b82dc842c0642fbc6d03ffe794d0326896885cb8219a93: Status 404 returned error can't find the container with id 0bbc415472db2e3901b82dc842c0642fbc6d03ffe794d0326896885cb8219a93 Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.851995 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.874175 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da850fe-027d-4ad5-b024-f3f81dd1aa20" path="/var/lib/kubelet/pods/6da850fe-027d-4ad5-b024-f3f81dd1aa20/volumes" Jan 31 05:04:19 crc kubenswrapper[4832]: I0131 05:04:19.875114 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf77216-c249-4ab3-8940-097819b60d51" path="/var/lib/kubelet/pods/bbf77216-c249-4ab3-8940-097819b60d51/volumes" Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.740675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerStarted","Data":"29f567b376f0fb19c0028d3fd66deef9e121dc1bfaa01e09056a488de1177461"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.741132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerStarted","Data":"b4f2a6d65161d5898ba82bfaf3120bbbc44b9ed92599f9c33876de5c8a403102"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.741152 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerStarted","Data":"0bbc415472db2e3901b82dc842c0642fbc6d03ffe794d0326896885cb8219a93"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.742809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-698zs" 
event={"ID":"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1","Type":"ContainerStarted","Data":"67ea7412fe852647062dfbe0a35ea651ad709dc0c0cb3188731f82144faa72d1"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.742884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-698zs" event={"ID":"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1","Type":"ContainerStarted","Data":"4f56b4894e180f34bf538add0fcde014765692606decd199744e0bca29e04378"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.745357 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9305639f-a8a1-4742-b3d9-fe416bcef2cd","Type":"ContainerStarted","Data":"ca45f5b29893411569bbe831ef3bfb20abbc4cd91e26e631d6a79ae88ea9f27f"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.745412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9305639f-a8a1-4742-b3d9-fe416bcef2cd","Type":"ContainerStarted","Data":"28ccadb174b3bf6a87ff07332d6b4deb49689e8fda7d5388f55db7bc22279ee8"} Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.798712 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.798679813 podStartE2EDuration="2.798679813s" podCreationTimestamp="2026-01-31 05:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:20.768478814 +0000 UTC m=+1269.717300539" watchObservedRunningTime="2026-01-31 05:04:20.798679813 +0000 UTC m=+1269.747501528" Jan 31 05:04:20 crc kubenswrapper[4832]: I0131 05:04:20.799812 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-698zs" podStartSLOduration=2.7998022689999997 podStartE2EDuration="2.799802269s" podCreationTimestamp="2026-01-31 05:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:20.784296036 +0000 UTC m=+1269.733117721" watchObservedRunningTime="2026-01-31 05:04:20.799802269 +0000 UTC m=+1269.748623984" Jan 31 05:04:21 crc kubenswrapper[4832]: I0131 05:04:21.785681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9305639f-a8a1-4742-b3d9-fe416bcef2cd","Type":"ContainerStarted","Data":"40819362c28299bf74d6d0ddb1718ecfc5b5328950eed7d0b0009f581802c62d"} Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.266837 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.343393 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.343814 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="dnsmasq-dns" containerID="cri-o://3890e9ab04c1da55c46a9e05bc800806f5f369b6aaa87ce1f5ee52eacbd709c6" gracePeriod=10 Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.814171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9305639f-a8a1-4742-b3d9-fe416bcef2cd","Type":"ContainerStarted","Data":"8db3415d5e3265d9939eac77f05c45d67bab38df46ae41efd11a54113ba29ee1"} Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.838823 4832 generic.go:334] "Generic (PLEG): container finished" podID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerID="3890e9ab04c1da55c46a9e05bc800806f5f369b6aaa87ce1f5ee52eacbd709c6" exitCode=0 Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.838891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" 
event={"ID":"f8c28478-7f9b-4d59-8b06-b434b9110244","Type":"ContainerDied","Data":"3890e9ab04c1da55c46a9e05bc800806f5f369b6aaa87ce1f5ee52eacbd709c6"} Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.915230 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.964670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzfw9\" (UniqueName: \"kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.964773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.964839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.964997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.965055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.965180 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb\") pod \"f8c28478-7f9b-4d59-8b06-b434b9110244\" (UID: \"f8c28478-7f9b-4d59-8b06-b434b9110244\") " Jan 31 05:04:22 crc kubenswrapper[4832]: I0131 05:04:22.975794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9" (OuterVolumeSpecName: "kube-api-access-lzfw9") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "kube-api-access-lzfw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.044283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.058014 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config" (OuterVolumeSpecName: "config") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.067260 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.067287 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzfw9\" (UniqueName: \"kubernetes.io/projected/f8c28478-7f9b-4d59-8b06-b434b9110244-kube-api-access-lzfw9\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.067302 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.077159 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.079283 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.082582 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8c28478-7f9b-4d59-8b06-b434b9110244" (UID: "f8c28478-7f9b-4d59-8b06-b434b9110244"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.169284 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.169330 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.169342 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8c28478-7f9b-4d59-8b06-b434b9110244-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.855753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" event={"ID":"f8c28478-7f9b-4d59-8b06-b434b9110244","Type":"ContainerDied","Data":"c71f13f56ac20169ec3c2ee00c31b16941fcda7250d71483b4cd91638179b88e"} Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.855842 4832 scope.go:117] "RemoveContainer" containerID="3890e9ab04c1da55c46a9e05bc800806f5f369b6aaa87ce1f5ee52eacbd709c6" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.855879 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bv49m" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.898155 4832 scope.go:117] "RemoveContainer" containerID="27e3435cb8b4ae4586c096ad64510ae5e7ed2a51d3ea2c917735fac608d826ee" Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.943831 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:04:23 crc kubenswrapper[4832]: I0131 05:04:23.962273 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bv49m"] Jan 31 05:04:24 crc kubenswrapper[4832]: I0131 05:04:24.887776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9305639f-a8a1-4742-b3d9-fe416bcef2cd","Type":"ContainerStarted","Data":"d71966eacf89f38185b22f78e7309495f2498a3e059cccead797624e8e963d5b"} Jan 31 05:04:24 crc kubenswrapper[4832]: I0131 05:04:24.888451 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 05:04:25 crc kubenswrapper[4832]: I0131 05:04:25.882193 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" path="/var/lib/kubelet/pods/f8c28478-7f9b-4d59-8b06-b434b9110244/volumes" Jan 31 05:04:25 crc kubenswrapper[4832]: I0131 05:04:25.911787 4832 generic.go:334] "Generic (PLEG): container finished" podID="c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" containerID="67ea7412fe852647062dfbe0a35ea651ad709dc0c0cb3188731f82144faa72d1" exitCode=0 Jan 31 05:04:25 crc kubenswrapper[4832]: I0131 05:04:25.913450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-698zs" event={"ID":"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1","Type":"ContainerDied","Data":"67ea7412fe852647062dfbe0a35ea651ad709dc0c0cb3188731f82144faa72d1"} Jan 31 05:04:25 crc kubenswrapper[4832]: I0131 05:04:25.948551 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=3.736416854 podStartE2EDuration="7.94851797s" podCreationTimestamp="2026-01-31 05:04:18 +0000 UTC" firstStartedPulling="2026-01-31 05:04:19.750480827 +0000 UTC m=+1268.699302512" lastFinishedPulling="2026-01-31 05:04:23.962581923 +0000 UTC m=+1272.911403628" observedRunningTime="2026-01-31 05:04:24.91207925 +0000 UTC m=+1273.860900945" watchObservedRunningTime="2026-01-31 05:04:25.94851797 +0000 UTC m=+1274.897339685" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.363011 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.473243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data\") pod \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.474056 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts\") pod \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.474246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle\") pod \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.474421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcf8r\" (UniqueName: \"kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r\") pod 
\"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\" (UID: \"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1\") " Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.480939 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r" (OuterVolumeSpecName: "kube-api-access-rcf8r") pod "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" (UID: "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1"). InnerVolumeSpecName "kube-api-access-rcf8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.482439 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts" (OuterVolumeSpecName: "scripts") pod "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" (UID: "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.504580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" (UID: "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.510199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data" (OuterVolumeSpecName: "config-data") pod "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" (UID: "c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.577480 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.577519 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.577529 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.577543 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcf8r\" (UniqueName: \"kubernetes.io/projected/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1-kube-api-access-rcf8r\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.947878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-698zs" event={"ID":"c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1","Type":"ContainerDied","Data":"4f56b4894e180f34bf538add0fcde014765692606decd199744e0bca29e04378"} Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.947949 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f56b4894e180f34bf538add0fcde014765692606decd199744e0bca29e04378" Jan 31 05:04:27 crc kubenswrapper[4832]: I0131 05:04:27.948026 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-698zs" Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.204469 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.205321 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-log" containerID="cri-o://b4f2a6d65161d5898ba82bfaf3120bbbc44b9ed92599f9c33876de5c8a403102" gracePeriod=30 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.205474 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-api" containerID="cri-o://29f567b376f0fb19c0028d3fd66deef9e121dc1bfaa01e09056a488de1177461" gracePeriod=30 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.215908 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.216162 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerName="nova-scheduler-scheduler" containerID="cri-o://91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" gracePeriod=30 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.375498 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.376241 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" containerID="cri-o://d35fa820c2d7ff7316dd790325fc573ff5f45193a6b85ae0a592c072f1f188d4" gracePeriod=30 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.376412 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" containerID="cri-o://bdcf5b3e67cc08a88be96a740cd87029e3d9ef7e02f9ae8f19161679b1ac6374" gracePeriod=30 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.961769 4832 generic.go:334] "Generic (PLEG): container finished" podID="e7983198-c069-49b9-aad7-18fa9e2923df" containerID="d35fa820c2d7ff7316dd790325fc573ff5f45193a6b85ae0a592c072f1f188d4" exitCode=143 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.961844 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerDied","Data":"d35fa820c2d7ff7316dd790325fc573ff5f45193a6b85ae0a592c072f1f188d4"} Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.967760 4832 generic.go:334] "Generic (PLEG): container finished" podID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerID="29f567b376f0fb19c0028d3fd66deef9e121dc1bfaa01e09056a488de1177461" exitCode=0 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.967796 4832 generic.go:334] "Generic (PLEG): container finished" podID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerID="b4f2a6d65161d5898ba82bfaf3120bbbc44b9ed92599f9c33876de5c8a403102" exitCode=143 Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.967826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerDied","Data":"29f567b376f0fb19c0028d3fd66deef9e121dc1bfaa01e09056a488de1177461"} Jan 31 05:04:28 crc kubenswrapper[4832]: I0131 05:04:28.967864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerDied","Data":"b4f2a6d65161d5898ba82bfaf3120bbbc44b9ed92599f9c33876de5c8a403102"} Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 
05:04:29.081052 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.223715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.224187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.224441 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs" (OuterVolumeSpecName: "logs") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.225106 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.225329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9xg6\" (UniqueName: \"kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.225367 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.225419 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle\") pod \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\" (UID: \"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0\") " Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.225901 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.231232 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6" (OuterVolumeSpecName: 
"kube-api-access-x9xg6") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "kube-api-access-x9xg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.252195 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data" (OuterVolumeSpecName: "config-data") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.259938 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.286624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.291820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" (UID: "7cec60ba-fa92-4bdd-9c03-d535d7cd40c0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.327782 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.327823 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9xg6\" (UniqueName: \"kubernetes.io/projected/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-kube-api-access-x9xg6\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.327835 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.327847 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.327857 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:29 crc kubenswrapper[4832]: E0131 05:04:29.707861 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:04:29 crc kubenswrapper[4832]: E0131 05:04:29.710377 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:04:29 crc kubenswrapper[4832]: E0131 05:04:29.712737 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 05:04:29 crc kubenswrapper[4832]: E0131 05:04:29.712767 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerName="nova-scheduler-scheduler" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.979502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cec60ba-fa92-4bdd-9c03-d535d7cd40c0","Type":"ContainerDied","Data":"0bbc415472db2e3901b82dc842c0642fbc6d03ffe794d0326896885cb8219a93"} Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.979607 4832 scope.go:117] "RemoveContainer" containerID="29f567b376f0fb19c0028d3fd66deef9e121dc1bfaa01e09056a488de1177461" Jan 31 05:04:29 crc kubenswrapper[4832]: I0131 05:04:29.979647 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.010343 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.012889 4832 scope.go:117] "RemoveContainer" containerID="b4f2a6d65161d5898ba82bfaf3120bbbc44b9ed92599f9c33876de5c8a403102" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.019334 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.058368 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:30 crc kubenswrapper[4832]: E0131 05:04:30.061251 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="dnsmasq-dns" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.066755 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="dnsmasq-dns" Jan 31 05:04:30 crc kubenswrapper[4832]: E0131 05:04:30.066961 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-log" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.067046 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-log" Jan 31 05:04:30 crc kubenswrapper[4832]: E0131 05:04:30.067140 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="init" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.067217 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="init" Jan 31 05:04:30 crc kubenswrapper[4832]: E0131 05:04:30.067289 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" 
containerName="nova-manage" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.067349 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" containerName="nova-manage" Jan 31 05:04:30 crc kubenswrapper[4832]: E0131 05:04:30.067428 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-api" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.067506 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-api" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.068096 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-log" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.068208 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" containerName="nova-api-api" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.069636 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c28478-7f9b-4d59-8b06-b434b9110244" containerName="dnsmasq-dns" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.069801 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" containerName="nova-manage" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.076614 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.079113 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.080110 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.081952 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.082999 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.147761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-config-data\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.147878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f25cab2-43da-43e5-9cb7-78112bf8ea08-logs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.147911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8glx\" (UniqueName: \"kubernetes.io/projected/3f25cab2-43da-43e5-9cb7-78112bf8ea08-kube-api-access-g8glx\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.148301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-public-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.148546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.148812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.251897 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.252016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-config-data\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.252094 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f25cab2-43da-43e5-9cb7-78112bf8ea08-logs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc 
kubenswrapper[4832]: I0131 05:04:30.252131 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8glx\" (UniqueName: \"kubernetes.io/projected/3f25cab2-43da-43e5-9cb7-78112bf8ea08-kube-api-access-g8glx\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.252246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-public-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.252329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.252872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f25cab2-43da-43e5-9cb7-78112bf8ea08-logs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.260250 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-config-data\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.260302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.262172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.262173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f25cab2-43da-43e5-9cb7-78112bf8ea08-public-tls-certs\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.269815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8glx\" (UniqueName: \"kubernetes.io/projected/3f25cab2-43da-43e5-9cb7-78112bf8ea08-kube-api-access-g8glx\") pod \"nova-api-0\" (UID: \"3f25cab2-43da-43e5-9cb7-78112bf8ea08\") " pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.410043 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.922780 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 05:04:30 crc kubenswrapper[4832]: I0131 05:04:30.994135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f25cab2-43da-43e5-9cb7-78112bf8ea08","Type":"ContainerStarted","Data":"cec0653dafa153bb3f8b094e26ba46acb6e7fd8f00e0789b050e307beea4ce5b"} Jan 31 05:04:31 crc kubenswrapper[4832]: I0131 05:04:31.531742 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:53630->10.217.0.198:8775: read: connection reset by peer" Jan 31 05:04:31 crc kubenswrapper[4832]: I0131 05:04:31.531863 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:53632->10.217.0.198:8775: read: connection reset by peer" Jan 31 05:04:31 crc kubenswrapper[4832]: I0131 05:04:31.876904 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cec60ba-fa92-4bdd-9c03-d535d7cd40c0" path="/var/lib/kubelet/pods/7cec60ba-fa92-4bdd-9c03-d535d7cd40c0/volumes" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.012332 4832 generic.go:334] "Generic (PLEG): container finished" podID="e7983198-c069-49b9-aad7-18fa9e2923df" containerID="bdcf5b3e67cc08a88be96a740cd87029e3d9ef7e02f9ae8f19161679b1ac6374" exitCode=0 Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.012437 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerDied","Data":"bdcf5b3e67cc08a88be96a740cd87029e3d9ef7e02f9ae8f19161679b1ac6374"} Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.012512 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7983198-c069-49b9-aad7-18fa9e2923df","Type":"ContainerDied","Data":"d0e21c5e1ff5262d2b99a38204019773fab2d5de4fed477ce49ea36f15df3609"} Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.012539 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e21c5e1ff5262d2b99a38204019773fab2d5de4fed477ce49ea36f15df3609" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.014408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f25cab2-43da-43e5-9cb7-78112bf8ea08","Type":"ContainerStarted","Data":"6e02e4fdada2a56c9e5706ae4fbab359f00a33aca9b6ed24e9d43e28ab1c117d"} Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.014490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3f25cab2-43da-43e5-9cb7-78112bf8ea08","Type":"ContainerStarted","Data":"b5dfa89d7d9fda92488b7751d0becf99e36babf126519ceb57e994d81e58ed68"} Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.048017 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.047975149 podStartE2EDuration="2.047975149s" podCreationTimestamp="2026-01-31 05:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:32.038626828 +0000 UTC m=+1280.987448533" watchObservedRunningTime="2026-01-31 05:04:32.047975149 +0000 UTC m=+1280.996796834" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.058919 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.112248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs\") pod \"e7983198-c069-49b9-aad7-18fa9e2923df\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.112489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data\") pod \"e7983198-c069-49b9-aad7-18fa9e2923df\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.157880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data" (OuterVolumeSpecName: "config-data") pod "e7983198-c069-49b9-aad7-18fa9e2923df" (UID: "e7983198-c069-49b9-aad7-18fa9e2923df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.196104 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e7983198-c069-49b9-aad7-18fa9e2923df" (UID: "e7983198-c069-49b9-aad7-18fa9e2923df"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.215212 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgtfk\" (UniqueName: \"kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk\") pod \"e7983198-c069-49b9-aad7-18fa9e2923df\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.215315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs\") pod \"e7983198-c069-49b9-aad7-18fa9e2923df\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.215346 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle\") pod \"e7983198-c069-49b9-aad7-18fa9e2923df\" (UID: \"e7983198-c069-49b9-aad7-18fa9e2923df\") " Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.215979 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs" (OuterVolumeSpecName: "logs") pod "e7983198-c069-49b9-aad7-18fa9e2923df" (UID: "e7983198-c069-49b9-aad7-18fa9e2923df"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.216008 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.216025 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.220077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk" (OuterVolumeSpecName: "kube-api-access-pgtfk") pod "e7983198-c069-49b9-aad7-18fa9e2923df" (UID: "e7983198-c069-49b9-aad7-18fa9e2923df"). InnerVolumeSpecName "kube-api-access-pgtfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.252839 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7983198-c069-49b9-aad7-18fa9e2923df" (UID: "e7983198-c069-49b9-aad7-18fa9e2923df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.318544 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgtfk\" (UniqueName: \"kubernetes.io/projected/e7983198-c069-49b9-aad7-18fa9e2923df-kube-api-access-pgtfk\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.318602 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7983198-c069-49b9-aad7-18fa9e2923df-logs\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:32 crc kubenswrapper[4832]: I0131 05:04:32.318613 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7983198-c069-49b9-aad7-18fa9e2923df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.044833 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.092527 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.111721 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.132811 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:33 crc kubenswrapper[4832]: E0131 05:04:33.133480 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.133503 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" Jan 31 05:04:33 crc kubenswrapper[4832]: E0131 05:04:33.133538 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.133550 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.133881 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-log" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.133930 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" containerName="nova-metadata-metadata" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.137515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.145427 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.145980 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.146594 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.242550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-config-data\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.242649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.242695 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.242832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7f27d-9408-4f6b-ab25-a0f453cc377e-logs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.242894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb64l\" (UniqueName: \"kubernetes.io/projected/83d7f27d-9408-4f6b-ab25-a0f453cc377e-kube-api-access-sb64l\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.345337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-config-data\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.345400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.345444 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.345607 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7f27d-9408-4f6b-ab25-a0f453cc377e-logs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.345670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb64l\" (UniqueName: \"kubernetes.io/projected/83d7f27d-9408-4f6b-ab25-a0f453cc377e-kube-api-access-sb64l\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.346286 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7f27d-9408-4f6b-ab25-a0f453cc377e-logs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.351921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.352235 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-config-data\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.362715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7f27d-9408-4f6b-ab25-a0f453cc377e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.391606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb64l\" (UniqueName: \"kubernetes.io/projected/83d7f27d-9408-4f6b-ab25-a0f453cc377e-kube-api-access-sb64l\") pod \"nova-metadata-0\" (UID: \"83d7f27d-9408-4f6b-ab25-a0f453cc377e\") " pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.458417 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.872763 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7983198-c069-49b9-aad7-18fa9e2923df" path="/var/lib/kubelet/pods/e7983198-c069-49b9-aad7-18fa9e2923df/volumes" Jan 31 05:04:33 crc kubenswrapper[4832]: I0131 05:04:33.983049 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.057044 4832 generic.go:334] "Generic (PLEG): container finished" podID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerID="91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" exitCode=0 Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.057221 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81a32fb3-95af-439e-a95f-f35476a2cc15","Type":"ContainerDied","Data":"91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4"} Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.060455 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83d7f27d-9408-4f6b-ab25-a0f453cc377e","Type":"ContainerStarted","Data":"b52dc59d37642cd9ec754f2ee4a2bed76f6e3a90218edd0ff366b4fec7f92fde"} Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.105416 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.165610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-952vm\" (UniqueName: \"kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm\") pod \"81a32fb3-95af-439e-a95f-f35476a2cc15\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.165726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle\") pod \"81a32fb3-95af-439e-a95f-f35476a2cc15\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.165761 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data\") pod \"81a32fb3-95af-439e-a95f-f35476a2cc15\" (UID: \"81a32fb3-95af-439e-a95f-f35476a2cc15\") " Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.189900 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm" (OuterVolumeSpecName: "kube-api-access-952vm") pod "81a32fb3-95af-439e-a95f-f35476a2cc15" (UID: "81a32fb3-95af-439e-a95f-f35476a2cc15"). InnerVolumeSpecName "kube-api-access-952vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.224277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data" (OuterVolumeSpecName: "config-data") pod "81a32fb3-95af-439e-a95f-f35476a2cc15" (UID: "81a32fb3-95af-439e-a95f-f35476a2cc15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.239862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81a32fb3-95af-439e-a95f-f35476a2cc15" (UID: "81a32fb3-95af-439e-a95f-f35476a2cc15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.268713 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-952vm\" (UniqueName: \"kubernetes.io/projected/81a32fb3-95af-439e-a95f-f35476a2cc15-kube-api-access-952vm\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.268749 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:34 crc kubenswrapper[4832]: I0131 05:04:34.268763 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a32fb3-95af-439e-a95f-f35476a2cc15-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.072648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"81a32fb3-95af-439e-a95f-f35476a2cc15","Type":"ContainerDied","Data":"128da660b9523c8f173d597630684d00850a0b084557cc5ade84097c7d43cfe1"} Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.073235 4832 scope.go:117] "RemoveContainer" containerID="91daa3ddf19ba16a465ba969ea504c4b70010d3f3efc2e8d1ffa129fcf7a3ff4" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.072722 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.078479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83d7f27d-9408-4f6b-ab25-a0f453cc377e","Type":"ContainerStarted","Data":"92d7a8cba6180d57b4a2c9794fe475459c1709554f58d8aa77cecc480e60b98b"} Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.078590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83d7f27d-9408-4f6b-ab25-a0f453cc377e","Type":"ContainerStarted","Data":"c885f877e02665df44c59c0e8f28d2d0ceb90428e6bf02e6180ba7c89c867c03"} Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.110925 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.110897848 podStartE2EDuration="2.110897848s" podCreationTimestamp="2026-01-31 05:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:35.101280149 +0000 UTC m=+1284.050101834" watchObservedRunningTime="2026-01-31 05:04:35.110897848 +0000 UTC m=+1284.059719533" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.129006 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.148253 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.160835 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:35 crc kubenswrapper[4832]: E0131 05:04:35.161343 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerName="nova-scheduler-scheduler" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.161365 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerName="nova-scheduler-scheduler" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.161584 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" containerName="nova-scheduler-scheduler" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.162388 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.167018 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.175953 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.298362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-config-data\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.298587 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d28p\" (UniqueName: \"kubernetes.io/projected/e97fe0a4-c76d-439e-a096-460328d1d9d4-kube-api-access-2d28p\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.298816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: 
I0131 05:04:35.401268 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.401407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-config-data\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.401613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d28p\" (UniqueName: \"kubernetes.io/projected/e97fe0a4-c76d-439e-a096-460328d1d9d4-kube-api-access-2d28p\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.411267 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.421037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fe0a4-c76d-439e-a096-460328d1d9d4-config-data\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.429037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d28p\" (UniqueName: 
\"kubernetes.io/projected/e97fe0a4-c76d-439e-a096-460328d1d9d4-kube-api-access-2d28p\") pod \"nova-scheduler-0\" (UID: \"e97fe0a4-c76d-439e-a096-460328d1d9d4\") " pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.488529 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 05:04:35 crc kubenswrapper[4832]: I0131 05:04:35.872056 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a32fb3-95af-439e-a95f-f35476a2cc15" path="/var/lib/kubelet/pods/81a32fb3-95af-439e-a95f-f35476a2cc15/volumes" Jan 31 05:04:36 crc kubenswrapper[4832]: I0131 05:04:36.047952 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 05:04:36 crc kubenswrapper[4832]: I0131 05:04:36.094977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e97fe0a4-c76d-439e-a096-460328d1d9d4","Type":"ContainerStarted","Data":"8437cc54c991d2b2adaf09ae52adc9c8e0ba9f1c65ccd6962d909cbbf770d5f2"} Jan 31 05:04:37 crc kubenswrapper[4832]: I0131 05:04:37.109921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e97fe0a4-c76d-439e-a096-460328d1d9d4","Type":"ContainerStarted","Data":"06fcc56a5eee466be8b029938b30d061582d0ddd1605fa038e15075b3ae4426c"} Jan 31 05:04:37 crc kubenswrapper[4832]: I0131 05:04:37.136865 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.136834169 podStartE2EDuration="2.136834169s" podCreationTimestamp="2026-01-31 05:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:04:37.136403076 +0000 UTC m=+1286.085224771" watchObservedRunningTime="2026-01-31 05:04:37.136834169 +0000 UTC m=+1286.085655884" Jan 31 05:04:38 crc kubenswrapper[4832]: I0131 05:04:38.458634 
4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:04:38 crc kubenswrapper[4832]: I0131 05:04:38.459148 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 05:04:40 crc kubenswrapper[4832]: I0131 05:04:40.411215 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:04:40 crc kubenswrapper[4832]: I0131 05:04:40.411327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 05:04:40 crc kubenswrapper[4832]: I0131 05:04:40.489294 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 05:04:41 crc kubenswrapper[4832]: I0131 05:04:41.460931 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3f25cab2-43da-43e5-9cb7-78112bf8ea08" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:41 crc kubenswrapper[4832]: I0131 05:04:41.460975 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3f25cab2-43da-43e5-9cb7-78112bf8ea08" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:43 crc kubenswrapper[4832]: I0131 05:04:43.459192 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:04:43 crc kubenswrapper[4832]: I0131 05:04:43.459729 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 05:04:44 crc kubenswrapper[4832]: I0131 05:04:44.475966 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="83d7f27d-9408-4f6b-ab25-a0f453cc377e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:44 crc kubenswrapper[4832]: I0131 05:04:44.475985 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="83d7f27d-9408-4f6b-ab25-a0f453cc377e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 05:04:45 crc kubenswrapper[4832]: I0131 05:04:45.489926 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 05:04:45 crc kubenswrapper[4832]: I0131 05:04:45.545655 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 05:04:46 crc kubenswrapper[4832]: I0131 05:04:46.291634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 05:04:48 crc kubenswrapper[4832]: I0131 05:04:48.539919 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:04:48 crc kubenswrapper[4832]: I0131 05:04:48.540468 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:04:49 crc kubenswrapper[4832]: I0131 05:04:49.178741 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Jan 31 05:04:50 crc kubenswrapper[4832]: I0131 05:04:50.423757 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:04:50 crc kubenswrapper[4832]: I0131 05:04:50.426670 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:04:50 crc kubenswrapper[4832]: I0131 05:04:50.427847 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 05:04:50 crc kubenswrapper[4832]: I0131 05:04:50.440518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:04:51 crc kubenswrapper[4832]: I0131 05:04:51.291968 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 05:04:51 crc kubenswrapper[4832]: I0131 05:04:51.299210 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 05:04:53 crc kubenswrapper[4832]: I0131 05:04:53.472330 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:04:53 crc kubenswrapper[4832]: I0131 05:04:53.473402 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 05:04:53 crc kubenswrapper[4832]: I0131 05:04:53.480582 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:04:54 crc kubenswrapper[4832]: I0131 05:04:54.332860 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 05:05:02 crc kubenswrapper[4832]: I0131 05:05:02.398598 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:03 crc kubenswrapper[4832]: I0131 05:05:03.980764 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 
05:05:07 crc kubenswrapper[4832]: I0131 05:05:07.862499 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="rabbitmq" containerID="cri-o://0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f" gracePeriod=604795 Jan 31 05:05:08 crc kubenswrapper[4832]: I0131 05:05:08.998580 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="rabbitmq" containerID="cri-o://17182016bd86d61e0916a20265f1e216a103b5db4fd0669e3e35dc5bb43ad9f3" gracePeriod=604795 Jan 31 05:05:13 crc kubenswrapper[4832]: I0131 05:05:13.439620 4832 scope.go:117] "RemoveContainer" containerID="7595e34e5d86ad042aeb44a629f85d59fba952656d7998797826b6fdb4be2d39" Jan 31 05:05:13 crc kubenswrapper[4832]: I0131 05:05:13.501486 4832 scope.go:117] "RemoveContainer" containerID="2cb1b1e7af0aa16612f95c90c2e4e87c5fe8de5dc2df54e7208bd8f572dca2a7" Jan 31 05:05:13 crc kubenswrapper[4832]: I0131 05:05:13.572038 4832 scope.go:117] "RemoveContainer" containerID="f8dfacbb1ab33e2d670e178b61af90f7b99dc0a8fa665a11368e1f080ea863a2" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.567005 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.623943 4832 generic.go:334] "Generic (PLEG): container finished" podID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerID="0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f" exitCode=0 Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.624033 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerDied","Data":"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f"} Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.624084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9a976894-0f59-4fb5-a297-c43c1bf88b47","Type":"ContainerDied","Data":"82fe0d11acd6a4abda4ea2049e58853aeb63be2118775cfee3a83dc3ac4e37a8"} Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.624118 4832 scope.go:117] "RemoveContainer" containerID="0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.624823 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx9s\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654421 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654514 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654668 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.654797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd\") pod \"9a976894-0f59-4fb5-a297-c43c1bf88b47\" (UID: \"9a976894-0f59-4fb5-a297-c43c1bf88b47\") " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 
05:05:14.659828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.662048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.666998 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.668258 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.691033 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.691288 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.695218 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s" (OuterVolumeSpecName: "kube-api-access-8hx9s") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "kube-api-access-8hx9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.713863 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info" (OuterVolumeSpecName: "pod-info") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.730857 4832 scope.go:117] "RemoveContainer" containerID="45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763739 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a976894-0f59-4fb5-a297-c43c1bf88b47-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763779 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763791 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763804 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763843 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763854 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763863 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/9a976894-0f59-4fb5-a297-c43c1bf88b47-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.763873 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hx9s\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-kube-api-access-8hx9s\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.764858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data" (OuterVolumeSpecName: "config-data") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.776098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf" (OuterVolumeSpecName: "server-conf") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.816525 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.865954 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.866272 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.866355 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a976894-0f59-4fb5-a297-c43c1bf88b47-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.874294 4832 scope.go:117] "RemoveContainer" containerID="0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f" Jan 31 05:05:14 crc kubenswrapper[4832]: E0131 05:05:14.874799 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f\": container with ID starting with 0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f not found: ID does not exist" containerID="0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.874830 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f"} err="failed to get container status 
\"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f\": rpc error: code = NotFound desc = could not find container \"0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f\": container with ID starting with 0e3ac977bfefc33d15a981fe0465e7e596f0d264d9f34edb665181d1e1c1c76f not found: ID does not exist" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.874854 4832 scope.go:117] "RemoveContainer" containerID="45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2" Jan 31 05:05:14 crc kubenswrapper[4832]: E0131 05:05:14.875248 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2\": container with ID starting with 45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2 not found: ID does not exist" containerID="45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.875273 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2"} err="failed to get container status \"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2\": rpc error: code = NotFound desc = could not find container \"45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2\": container with ID starting with 45e8607d87db8649c987ea7771c353626689748228529fc17140f44f17fa1ea2 not found: ID does not exist" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.887693 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9a976894-0f59-4fb5-a297-c43c1bf88b47" (UID: "9a976894-0f59-4fb5-a297-c43c1bf88b47"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.968242 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a976894-0f59-4fb5-a297-c43c1bf88b47-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.968607 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:14 crc kubenswrapper[4832]: I0131 05:05:14.975626 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.015053 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:15 crc kubenswrapper[4832]: E0131 05:05:15.015532 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="rabbitmq" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.015551 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="rabbitmq" Jan 31 05:05:15 crc kubenswrapper[4832]: E0131 05:05:15.015587 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="setup-container" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.015596 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="setup-container" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.015787 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" containerName="rabbitmq" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.016904 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.021440 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.021830 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.021969 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dg6kd" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.022079 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.022197 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.022426 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.022668 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.028982 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172418 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172455 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172498 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f59551b3-d149-4bf1-90e2-428e0615f1ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f59551b3-d149-4bf1-90e2-428e0615f1ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.172808 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl95r\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-kube-api-access-hl95r\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.277980 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f59551b3-d149-4bf1-90e2-428e0615f1ce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278065 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl95r\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-kube-api-access-hl95r\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " 
pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f59551b3-d149-4bf1-90e2-428e0615f1ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278262 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278328 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.278494 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.280133 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.280400 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.280771 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-config-data\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.281975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.292315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f59551b3-d149-4bf1-90e2-428e0615f1ce-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.293211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.297686 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f59551b3-d149-4bf1-90e2-428e0615f1ce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.300352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.320342 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f59551b3-d149-4bf1-90e2-428e0615f1ce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.325456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl95r\" (UniqueName: \"kubernetes.io/projected/f59551b3-d149-4bf1-90e2-428e0615f1ce-kube-api-access-hl95r\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.342202 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f59551b3-d149-4bf1-90e2-428e0615f1ce\") " pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.369026 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.660771 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerID="17182016bd86d61e0916a20265f1e216a103b5db4fd0669e3e35dc5bb43ad9f3" exitCode=0 Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.660943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerDied","Data":"17182016bd86d61e0916a20265f1e216a103b5db4fd0669e3e35dc5bb43ad9f3"} Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.807877 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.876637 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a976894-0f59-4fb5-a297-c43c1bf88b47" path="/var/lib/kubelet/pods/9a976894-0f59-4fb5-a297-c43c1bf88b47/volumes" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901777 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901842 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901884 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901911 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data\") pod 
\"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901958 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.901997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.902048 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.902083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.902302 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.902333 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcc8n\" 
(UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n\") pod \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\" (UID: \"e4e0df3a-5b8c-43ad-b404-5a9716f774a6\") " Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.905473 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.905592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.915246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.918751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.922943 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.923138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n" (OuterVolumeSpecName: "kube-api-access-tcc8n") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "kube-api-access-tcc8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.926850 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.934206 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info" (OuterVolumeSpecName: "pod-info") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 05:05:15 crc kubenswrapper[4832]: I0131 05:05:15.963421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data" (OuterVolumeSpecName: "config-data") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007268 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf" (OuterVolumeSpecName: "server-conf") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007383 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007794 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcc8n\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-kube-api-access-tcc8n\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007810 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007822 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-erlang-cookie\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007835 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007845 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007856 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007866 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.007892 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.016359 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 05:05:16 crc kubenswrapper[4832]: W0131 05:05:16.023467 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf59551b3_d149_4bf1_90e2_428e0615f1ce.slice/crio-ee8aa19016c6b4f33e8639bf44f2bfa4a306469cd9857ac0b98587096eeaca3f WatchSource:0}: Error finding container ee8aa19016c6b4f33e8639bf44f2bfa4a306469cd9857ac0b98587096eeaca3f: Status 404 returned error can't find the container with id 
ee8aa19016c6b4f33e8639bf44f2bfa4a306469cd9857ac0b98587096eeaca3f Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.033674 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.096021 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e4e0df3a-5b8c-43ad-b404-5a9716f774a6" (UID: "e4e0df3a-5b8c-43ad-b404-5a9716f774a6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.109664 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.109710 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.109724 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4e0df3a-5b8c-43ad-b404-5a9716f774a6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.682452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f59551b3-d149-4bf1-90e2-428e0615f1ce","Type":"ContainerStarted","Data":"ee8aa19016c6b4f33e8639bf44f2bfa4a306469cd9857ac0b98587096eeaca3f"} Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.685643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"e4e0df3a-5b8c-43ad-b404-5a9716f774a6","Type":"ContainerDied","Data":"93e16f1eabd9e2d6fca7a4b9ea3cbb19bd6a35f81c3883d006f5e0cf6eba25f6"} Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.685696 4832 scope.go:117] "RemoveContainer" containerID="17182016bd86d61e0916a20265f1e216a103b5db4fd0669e3e35dc5bb43ad9f3" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.685756 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.709980 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:16 crc kubenswrapper[4832]: E0131 05:05:16.711134 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="setup-container" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.711162 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="setup-container" Jan 31 05:05:16 crc kubenswrapper[4832]: E0131 05:05:16.711200 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="rabbitmq" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.711210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="rabbitmq" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.711624 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" containerName="rabbitmq" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.720066 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.728518 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnlz\" (UniqueName: \"kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753821 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.753887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.757796 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.760280 4832 scope.go:117] "RemoveContainer" containerID="4f6ee4af033e0bef4d17d30f09f5bca28c73e1fe4562858c0fe0a7a7c45afa50" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.785241 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.823124 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.835552 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.861820 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.861970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.862036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.862067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnlz\" (UniqueName: \"kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.862099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.862148 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.862184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.863366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.865706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.866597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.866675 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc\") pod 
\"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.867409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.867495 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.873644 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.876072 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.880940 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.880982 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.881202 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.881388 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.881507 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.881657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qxj4b" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.881836 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.884031 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.890860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnlz\" (UniqueName: \"kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz\") pod \"dnsmasq-dns-d558885bc-k7dnr\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966723 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8cc578f-3827-4100-aa82-e6cf59602353-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966800 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966888 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.966953 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.967023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.967044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.967072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8cc578f-3827-4100-aa82-e6cf59602353-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.967089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:16 crc kubenswrapper[4832]: I0131 05:05:16.967135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdwj\" (UniqueName: 
\"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-kube-api-access-tvdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8cc578f-3827-4100-aa82-e6cf59602353-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068856 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.068950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.069002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.069049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8cc578f-3827-4100-aa82-e6cf59602353-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.069078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.069114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdwj\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-kube-api-access-tvdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 
crc kubenswrapper[4832]: I0131 05:05:17.069151 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.069705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.070767 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.070916 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.071553 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.072268 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.073192 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f8cc578f-3827-4100-aa82-e6cf59602353-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.074197 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f8cc578f-3827-4100-aa82-e6cf59602353-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.074349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.075247 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f8cc578f-3827-4100-aa82-e6cf59602353-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.087506 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.137862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.141918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdwj\" (UniqueName: \"kubernetes.io/projected/f8cc578f-3827-4100-aa82-e6cf59602353-kube-api-access-tvdwj\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.371027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f8cc578f-3827-4100-aa82-e6cf59602353\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.380147 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:17 crc kubenswrapper[4832]: W0131 05:05:17.381277 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7a4722a_cd34_4ea4_b71e_0d51d6da13bb.slice/crio-f1ed6e97890c63fca316b077a7c7744095a95159d42817b88353e0c5efb36ea0 WatchSource:0}: Error finding container f1ed6e97890c63fca316b077a7c7744095a95159d42817b88353e0c5efb36ea0: Status 404 returned error can't find the container with id f1ed6e97890c63fca316b077a7c7744095a95159d42817b88353e0c5efb36ea0 Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.550698 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.702486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" event={"ID":"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb","Type":"ContainerStarted","Data":"f1ed6e97890c63fca316b077a7c7744095a95159d42817b88353e0c5efb36ea0"} Jan 31 05:05:17 crc kubenswrapper[4832]: I0131 05:05:17.872989 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e0df3a-5b8c-43ad-b404-5a9716f774a6" path="/var/lib/kubelet/pods/e4e0df3a-5b8c-43ad-b404-5a9716f774a6/volumes" Jan 31 05:05:18 crc kubenswrapper[4832]: W0131 05:05:18.050214 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8cc578f_3827_4100_aa82_e6cf59602353.slice/crio-0a227ac91ada3d282da7a9b81f711db519f8d4903ab12f18fcda265d24f249a2 WatchSource:0}: Error finding container 0a227ac91ada3d282da7a9b81f711db519f8d4903ab12f18fcda265d24f249a2: Status 404 returned error can't find the container with id 0a227ac91ada3d282da7a9b81f711db519f8d4903ab12f18fcda265d24f249a2 Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.053788 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.540828 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.540900 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.540954 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.541726 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.541797 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e" gracePeriod=600 Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.721417 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f59551b3-d149-4bf1-90e2-428e0615f1ce","Type":"ContainerStarted","Data":"a3538eda023a90e405c50c23cddaafe38c8d41501bdffb702cebd924a03820bb"} Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.725165 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e" exitCode=0 Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.725315 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" 
event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e"} Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.725407 4832 scope.go:117] "RemoveContainer" containerID="5f27820a852e7aad47dd943e170fd10884dd63d7a7b2bc83ff12b5f3f39f5de0" Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.729504 4832 generic.go:334] "Generic (PLEG): container finished" podID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerID="fe6a5b401c0dd3ad39eb2034d5392def2ff436a308c328f77009ed8cc3c50584" exitCode=0 Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.729650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" event={"ID":"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb","Type":"ContainerDied","Data":"fe6a5b401c0dd3ad39eb2034d5392def2ff436a308c328f77009ed8cc3c50584"} Jan 31 05:05:18 crc kubenswrapper[4832]: I0131 05:05:18.737744 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8cc578f-3827-4100-aa82-e6cf59602353","Type":"ContainerStarted","Data":"0a227ac91ada3d282da7a9b81f711db519f8d4903ab12f18fcda265d24f249a2"} Jan 31 05:05:19 crc kubenswrapper[4832]: I0131 05:05:19.753899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90"} Jan 31 05:05:19 crc kubenswrapper[4832]: I0131 05:05:19.759303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" event={"ID":"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb","Type":"ContainerStarted","Data":"8840ba247905345967e0d4ec6c290b95ce7a49381e36e77ce9ceded87a9cf61f"} Jan 31 05:05:19 crc kubenswrapper[4832]: I0131 05:05:19.818745 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-d558885bc-k7dnr" podStartSLOduration=3.818705468 podStartE2EDuration="3.818705468s" podCreationTimestamp="2026-01-31 05:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:05:19.797586691 +0000 UTC m=+1328.746408416" watchObservedRunningTime="2026-01-31 05:05:19.818705468 +0000 UTC m=+1328.767527193" Jan 31 05:05:20 crc kubenswrapper[4832]: I0131 05:05:20.780968 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8cc578f-3827-4100-aa82-e6cf59602353","Type":"ContainerStarted","Data":"381c8e33fac0bc5a6701820625964e8be9e35bb4e326c0eabf12aa3fb42b91a3"} Jan 31 05:05:20 crc kubenswrapper[4832]: I0131 05:05:20.781997 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.088761 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.179744 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.180643 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="dnsmasq-dns" containerID="cri-o://2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159" gracePeriod=10 Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.266662 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: connect: connection refused" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 
05:05:27.373981 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-c5l4d"] Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.380926 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.425700 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-c5l4d"] Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.537770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-config\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.537832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.538190 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.538474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2mw\" (UniqueName: \"kubernetes.io/projected/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-kube-api-access-2d2mw\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" 
(UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.538630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.538772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.538881 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.640991 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2mw\" (UniqueName: \"kubernetes.io/projected/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-kube-api-access-2d2mw\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641402 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-config\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.641573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: 
\"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.642525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.643408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.644409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.645301 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.646970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-config\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 
crc kubenswrapper[4832]: I0131 05:05:27.648030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.663825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2mw\" (UniqueName: \"kubernetes.io/projected/b32a39cb-1499-49b0-8407-b2bfd9c3abbb-kube-api-access-2d2mw\") pod \"dnsmasq-dns-78c64bc9c5-c5l4d\" (UID: \"b32a39cb-1499-49b0-8407-b2bfd9c3abbb\") " pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.707147 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.834952 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.888587 4832 generic.go:334] "Generic (PLEG): container finished" podID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerID="2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159" exitCode=0 Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.888708 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.892784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" event={"ID":"17da1585-a60a-4893-bd3d-2d76fd4ca5a1","Type":"ContainerDied","Data":"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159"} Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.892852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-gdwbp" event={"ID":"17da1585-a60a-4893-bd3d-2d76fd4ca5a1","Type":"ContainerDied","Data":"c3c40353435f45f4d35d3df405bda093c22082082f6dab31981dd53700e4cc61"} Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.892887 4832 scope.go:117] "RemoveContainer" containerID="2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.935589 4832 scope.go:117] "RemoveContainer" containerID="91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336" Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.947747 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.947972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.948102 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-452cd\" (UniqueName: 
\"kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.948209 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.948303 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.948378 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0\") pod \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\" (UID: \"17da1585-a60a-4893-bd3d-2d76fd4ca5a1\") " Jan 31 05:05:27 crc kubenswrapper[4832]: I0131 05:05:27.957457 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd" (OuterVolumeSpecName: "kube-api-access-452cd") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "kube-api-access-452cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.024051 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.025138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.038451 4832 scope.go:117] "RemoveContainer" containerID="2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159" Jan 31 05:05:28 crc kubenswrapper[4832]: E0131 05:05:28.039003 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159\": container with ID starting with 2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159 not found: ID does not exist" containerID="2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.039045 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159"} err="failed to get container status \"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159\": rpc error: code = NotFound desc = could not find container 
\"2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159\": container with ID starting with 2a3cbbfe8194c045fa4f37f0a1d5baae529d3f87b5bb9ed17c41cfd4e8376159 not found: ID does not exist" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.039072 4832 scope.go:117] "RemoveContainer" containerID="91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336" Jan 31 05:05:28 crc kubenswrapper[4832]: E0131 05:05:28.039323 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336\": container with ID starting with 91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336 not found: ID does not exist" containerID="91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.039339 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336"} err="failed to get container status \"91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336\": rpc error: code = NotFound desc = could not find container \"91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336\": container with ID starting with 91c3483332b00774be435e4261304f81293e761bb7a585e6d0966dc27764e336 not found: ID does not exist" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.042791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config" (OuterVolumeSpecName: "config") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.045466 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.051952 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17da1585-a60a-4893-bd3d-2d76fd4ca5a1" (UID: "17da1585-a60a-4893-bd3d-2d76fd4ca5a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052169 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052202 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052213 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-452cd\" (UniqueName: \"kubernetes.io/projected/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-kube-api-access-452cd\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052224 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-config\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052233 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.052247 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17da1585-a60a-4893-bd3d-2d76fd4ca5a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.090034 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-c5l4d"] Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.245390 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.255881 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-gdwbp"] Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.907146 4832 generic.go:334] "Generic (PLEG): container finished" podID="b32a39cb-1499-49b0-8407-b2bfd9c3abbb" containerID="59c9f98ab80ad8b4de205d80b50596658f85c99666370a316473e0a98c4874cb" exitCode=0 Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.908311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" event={"ID":"b32a39cb-1499-49b0-8407-b2bfd9c3abbb","Type":"ContainerDied","Data":"59c9f98ab80ad8b4de205d80b50596658f85c99666370a316473e0a98c4874cb"} Jan 31 05:05:28 crc kubenswrapper[4832]: I0131 05:05:28.910385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" event={"ID":"b32a39cb-1499-49b0-8407-b2bfd9c3abbb","Type":"ContainerStarted","Data":"28377c57532ffaa4c836fe4f66fa45917e9dbe9bfce2edb5fee0ef2fb129ca84"} Jan 31 05:05:29 crc kubenswrapper[4832]: I0131 05:05:29.875466 
4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" path="/var/lib/kubelet/pods/17da1585-a60a-4893-bd3d-2d76fd4ca5a1/volumes" Jan 31 05:05:29 crc kubenswrapper[4832]: I0131 05:05:29.939385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" event={"ID":"b32a39cb-1499-49b0-8407-b2bfd9c3abbb","Type":"ContainerStarted","Data":"137869c7ef49cfab5f6943f5249204c49f4723ceff50b83ce631090c4afcb80a"} Jan 31 05:05:29 crc kubenswrapper[4832]: I0131 05:05:29.939587 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:29 crc kubenswrapper[4832]: I0131 05:05:29.967220 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" podStartSLOduration=2.9671865200000003 podStartE2EDuration="2.96718652s" podCreationTimestamp="2026-01-31 05:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:05:29.959013636 +0000 UTC m=+1338.907835351" watchObservedRunningTime="2026-01-31 05:05:29.96718652 +0000 UTC m=+1338.916008225" Jan 31 05:05:37 crc kubenswrapper[4832]: I0131 05:05:37.710051 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-c5l4d" Jan 31 05:05:37 crc kubenswrapper[4832]: I0131 05:05:37.823961 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:37 crc kubenswrapper[4832]: I0131 05:05:37.824431 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="dnsmasq-dns" containerID="cri-o://8840ba247905345967e0d4ec6c290b95ce7a49381e36e77ce9ceded87a9cf61f" gracePeriod=10 Jan 31 05:05:38 crc kubenswrapper[4832]: 
I0131 05:05:38.072813 4832 generic.go:334] "Generic (PLEG): container finished" podID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerID="8840ba247905345967e0d4ec6c290b95ce7a49381e36e77ce9ceded87a9cf61f" exitCode=0 Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.073353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" event={"ID":"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb","Type":"ContainerDied","Data":"8840ba247905345967e0d4ec6c290b95ce7a49381e36e77ce9ceded87a9cf61f"} Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.342071 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454091 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454324 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454428 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnlz\" (UniqueName: 
\"kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454576 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.454636 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config\") pod \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\" (UID: \"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb\") " Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.485203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz" (OuterVolumeSpecName: "kube-api-access-hbnlz") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "kube-api-access-hbnlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.510755 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.513475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.525589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.526641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config" (OuterVolumeSpecName: "config") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.531329 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.537330 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" (UID: "d7a4722a-cd34-4ea4-b71e-0d51d6da13bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558513 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558553 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnlz\" (UniqueName: \"kubernetes.io/projected/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-kube-api-access-hbnlz\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558587 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558601 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-svc\") on node \"crc\" DevicePath 
\"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558616 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558631 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:38 crc kubenswrapper[4832]: I0131 05:05:38.558643 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.097285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" event={"ID":"d7a4722a-cd34-4ea4-b71e-0d51d6da13bb","Type":"ContainerDied","Data":"f1ed6e97890c63fca316b077a7c7744095a95159d42817b88353e0c5efb36ea0"} Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.097955 4832 scope.go:117] "RemoveContainer" containerID="8840ba247905345967e0d4ec6c290b95ce7a49381e36e77ce9ceded87a9cf61f" Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.097412 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-k7dnr" Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.154843 4832 scope.go:117] "RemoveContainer" containerID="fe6a5b401c0dd3ad39eb2034d5392def2ff436a308c328f77009ed8cc3c50584" Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.166251 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.175899 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-k7dnr"] Jan 31 05:05:39 crc kubenswrapper[4832]: I0131 05:05:39.881802 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" path="/var/lib/kubelet/pods/d7a4722a-cd34-4ea4-b71e-0d51d6da13bb/volumes" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.984988 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7"] Jan 31 05:05:50 crc kubenswrapper[4832]: E0131 05:05:50.986516 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.987288 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: E0131 05:05:50.987310 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.987322 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: E0131 05:05:50.987343 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="init" Jan 31 05:05:50 
crc kubenswrapper[4832]: I0131 05:05:50.987353 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="init" Jan 31 05:05:50 crc kubenswrapper[4832]: E0131 05:05:50.987398 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="init" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.987406 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="init" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.990386 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="17da1585-a60a-4893-bd3d-2d76fd4ca5a1" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.990431 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4722a-cd34-4ea4-b71e-0d51d6da13bb" containerName="dnsmasq-dns" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.991342 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.994701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.994723 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.994798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:05:50 crc kubenswrapper[4832]: I0131 05:05:50.997264 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.006596 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7"] Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.068596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.068691 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.068771 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rjp\" (UniqueName: \"kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.068869 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.171081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.171214 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.171322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.171423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8rjp\" (UniqueName: \"kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.179070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.180436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.181048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.200290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8rjp\" (UniqueName: \"kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.259146 4832 generic.go:334] "Generic (PLEG): container finished" podID="f59551b3-d149-4bf1-90e2-428e0615f1ce" containerID="a3538eda023a90e405c50c23cddaafe38c8d41501bdffb702cebd924a03820bb" exitCode=0 Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.259215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f59551b3-d149-4bf1-90e2-428e0615f1ce","Type":"ContainerDied","Data":"a3538eda023a90e405c50c23cddaafe38c8d41501bdffb702cebd924a03820bb"} Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.335450 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:05:51 crc kubenswrapper[4832]: I0131 05:05:51.998460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7"] Jan 31 05:05:52 crc kubenswrapper[4832]: W0131 05:05:52.008741 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96b10887_6c77_4792_ae1d_87209c13b9fc.slice/crio-15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb WatchSource:0}: Error finding container 15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb: Status 404 returned error can't find the container with id 15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb Jan 31 05:05:52 crc kubenswrapper[4832]: I0131 05:05:52.014069 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:05:52 crc kubenswrapper[4832]: I0131 05:05:52.270881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" event={"ID":"96b10887-6c77-4792-ae1d-87209c13b9fc","Type":"ContainerStarted","Data":"15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb"} Jan 31 05:05:52 crc kubenswrapper[4832]: I0131 05:05:52.273400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f59551b3-d149-4bf1-90e2-428e0615f1ce","Type":"ContainerStarted","Data":"2f48c1f87ae5d52521079a0aa4932af1b9ae88f9c8fc713a9faf08891151ff5a"} Jan 31 05:05:52 crc kubenswrapper[4832]: I0131 05:05:52.273743 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 05:05:52 crc kubenswrapper[4832]: I0131 05:05:52.299330 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.299300913 
podStartE2EDuration="38.299300913s" podCreationTimestamp="2026-01-31 05:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:05:52.297235569 +0000 UTC m=+1361.246057264" watchObservedRunningTime="2026-01-31 05:05:52.299300913 +0000 UTC m=+1361.248122598" Jan 31 05:05:54 crc kubenswrapper[4832]: I0131 05:05:54.317834 4832 generic.go:334] "Generic (PLEG): container finished" podID="f8cc578f-3827-4100-aa82-e6cf59602353" containerID="381c8e33fac0bc5a6701820625964e8be9e35bb4e326c0eabf12aa3fb42b91a3" exitCode=0 Jan 31 05:05:54 crc kubenswrapper[4832]: I0131 05:05:54.318053 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8cc578f-3827-4100-aa82-e6cf59602353","Type":"ContainerDied","Data":"381c8e33fac0bc5a6701820625964e8be9e35bb4e326c0eabf12aa3fb42b91a3"} Jan 31 05:05:55 crc kubenswrapper[4832]: I0131 05:05:55.334007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f8cc578f-3827-4100-aa82-e6cf59602353","Type":"ContainerStarted","Data":"5ea5aff7c41cbb0a31f4362617bf28f23c0c08727eaff956ce97f89880e2417d"} Jan 31 05:05:55 crc kubenswrapper[4832]: I0131 05:05:55.334867 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:05:55 crc kubenswrapper[4832]: I0131 05:05:55.372366 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.372341487 podStartE2EDuration="39.372341487s" podCreationTimestamp="2026-01-31 05:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:05:55.363823913 +0000 UTC m=+1364.312645608" watchObservedRunningTime="2026-01-31 05:05:55.372341487 +0000 UTC m=+1364.321163172" Jan 31 05:06:05 crc 
kubenswrapper[4832]: I0131 05:06:05.373434 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 05:06:05 crc kubenswrapper[4832]: I0131 05:06:05.698148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" event={"ID":"96b10887-6c77-4792-ae1d-87209c13b9fc","Type":"ContainerStarted","Data":"0802e315f6d516bc8b875b13998aac54c48c30a2dfb45808ca5e04027118e713"} Jan 31 05:06:05 crc kubenswrapper[4832]: I0131 05:06:05.721760 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" podStartSLOduration=3.103882674 podStartE2EDuration="15.721736678s" podCreationTimestamp="2026-01-31 05:05:50 +0000 UTC" firstStartedPulling="2026-01-31 05:05:52.013802625 +0000 UTC m=+1360.962624300" lastFinishedPulling="2026-01-31 05:06:04.631656619 +0000 UTC m=+1373.580478304" observedRunningTime="2026-01-31 05:06:05.71154394 +0000 UTC m=+1374.660365625" watchObservedRunningTime="2026-01-31 05:06:05.721736678 +0000 UTC m=+1374.670558363" Jan 31 05:06:07 crc kubenswrapper[4832]: I0131 05:06:07.553782 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 05:06:18 crc kubenswrapper[4832]: I0131 05:06:18.847302 4832 generic.go:334] "Generic (PLEG): container finished" podID="96b10887-6c77-4792-ae1d-87209c13b9fc" containerID="0802e315f6d516bc8b875b13998aac54c48c30a2dfb45808ca5e04027118e713" exitCode=0 Jan 31 05:06:18 crc kubenswrapper[4832]: I0131 05:06:18.847418 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" event={"ID":"96b10887-6c77-4792-ae1d-87209c13b9fc","Type":"ContainerDied","Data":"0802e315f6d516bc8b875b13998aac54c48c30a2dfb45808ca5e04027118e713"} Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.376151 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.446722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory\") pod \"96b10887-6c77-4792-ae1d-87209c13b9fc\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.446798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8rjp\" (UniqueName: \"kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp\") pod \"96b10887-6c77-4792-ae1d-87209c13b9fc\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.447047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle\") pod \"96b10887-6c77-4792-ae1d-87209c13b9fc\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.447298 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam\") pod \"96b10887-6c77-4792-ae1d-87209c13b9fc\" (UID: \"96b10887-6c77-4792-ae1d-87209c13b9fc\") " Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.455833 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "96b10887-6c77-4792-ae1d-87209c13b9fc" (UID: "96b10887-6c77-4792-ae1d-87209c13b9fc"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.464302 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp" (OuterVolumeSpecName: "kube-api-access-s8rjp") pod "96b10887-6c77-4792-ae1d-87209c13b9fc" (UID: "96b10887-6c77-4792-ae1d-87209c13b9fc"). InnerVolumeSpecName "kube-api-access-s8rjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.479285 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96b10887-6c77-4792-ae1d-87209c13b9fc" (UID: "96b10887-6c77-4792-ae1d-87209c13b9fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.484904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory" (OuterVolumeSpecName: "inventory") pod "96b10887-6c77-4792-ae1d-87209c13b9fc" (UID: "96b10887-6c77-4792-ae1d-87209c13b9fc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.550166 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.550220 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.550231 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8rjp\" (UniqueName: \"kubernetes.io/projected/96b10887-6c77-4792-ae1d-87209c13b9fc-kube-api-access-s8rjp\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.550241 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b10887-6c77-4792-ae1d-87209c13b9fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.879730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" event={"ID":"96b10887-6c77-4792-ae1d-87209c13b9fc","Type":"ContainerDied","Data":"15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb"} Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.879769 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15776a40dca5c812c5093f0195e796efef529299cb6367353e1edba7fb09cfeb" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.879816 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.974522 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx"] Jan 31 05:06:20 crc kubenswrapper[4832]: E0131 05:06:20.976035 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b10887-6c77-4792-ae1d-87209c13b9fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.976086 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b10887-6c77-4792-ae1d-87209c13b9fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:20 crc kubenswrapper[4832]: I0131 05:06:20.977962 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b10887-6c77-4792-ae1d-87209c13b9fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.012757 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.016765 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.016819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.017041 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.017260 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.022351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx"] Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.060734 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzcmz\" (UniqueName: \"kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.060967 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.061103 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.179707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzcmz\" (UniqueName: \"kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.179811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.179865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.188528 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: 
\"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.190708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.203468 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzcmz\" (UniqueName: \"kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l9vkx\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.337039 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:21 crc kubenswrapper[4832]: I0131 05:06:21.939742 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx"] Jan 31 05:06:22 crc kubenswrapper[4832]: I0131 05:06:22.900298 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" event={"ID":"46cb5cd9-ca77-4c57-9d83-b4ef015da993","Type":"ContainerStarted","Data":"dd9743ab90188322f7c46309b777936cd7d7c9e31706ae11088ff7156f3678ef"} Jan 31 05:06:22 crc kubenswrapper[4832]: I0131 05:06:22.900716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" event={"ID":"46cb5cd9-ca77-4c57-9d83-b4ef015da993","Type":"ContainerStarted","Data":"67a6e2dba6311d6abb77e1c0f35270c66cbe464d9055813a9c1e2329b5bbff77"} Jan 31 05:06:22 crc kubenswrapper[4832]: I0131 05:06:22.921608 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" podStartSLOduration=2.474504317 podStartE2EDuration="2.9215868s" podCreationTimestamp="2026-01-31 05:06:20 +0000 UTC" firstStartedPulling="2026-01-31 05:06:21.941304486 +0000 UTC m=+1390.890126171" lastFinishedPulling="2026-01-31 05:06:22.388386969 +0000 UTC m=+1391.337208654" observedRunningTime="2026-01-31 05:06:22.916132641 +0000 UTC m=+1391.864954376" watchObservedRunningTime="2026-01-31 05:06:22.9215868 +0000 UTC m=+1391.870408485" Jan 31 05:06:25 crc kubenswrapper[4832]: I0131 05:06:25.932913 4832 generic.go:334] "Generic (PLEG): container finished" podID="46cb5cd9-ca77-4c57-9d83-b4ef015da993" containerID="dd9743ab90188322f7c46309b777936cd7d7c9e31706ae11088ff7156f3678ef" exitCode=0 Jan 31 05:06:25 crc kubenswrapper[4832]: I0131 05:06:25.933079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" event={"ID":"46cb5cd9-ca77-4c57-9d83-b4ef015da993","Type":"ContainerDied","Data":"dd9743ab90188322f7c46309b777936cd7d7c9e31706ae11088ff7156f3678ef"} Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.465770 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.658841 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzcmz\" (UniqueName: \"kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz\") pod \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.659053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam\") pod \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.659309 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory\") pod \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\" (UID: \"46cb5cd9-ca77-4c57-9d83-b4ef015da993\") " Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.670594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz" (OuterVolumeSpecName: "kube-api-access-mzcmz") pod "46cb5cd9-ca77-4c57-9d83-b4ef015da993" (UID: "46cb5cd9-ca77-4c57-9d83-b4ef015da993"). InnerVolumeSpecName "kube-api-access-mzcmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.692221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46cb5cd9-ca77-4c57-9d83-b4ef015da993" (UID: "46cb5cd9-ca77-4c57-9d83-b4ef015da993"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.705339 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory" (OuterVolumeSpecName: "inventory") pod "46cb5cd9-ca77-4c57-9d83-b4ef015da993" (UID: "46cb5cd9-ca77-4c57-9d83-b4ef015da993"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.762328 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.762377 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46cb5cd9-ca77-4c57-9d83-b4ef015da993-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.762394 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzcmz\" (UniqueName: \"kubernetes.io/projected/46cb5cd9-ca77-4c57-9d83-b4ef015da993-kube-api-access-mzcmz\") on node \"crc\" DevicePath \"\"" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.969930 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" 
event={"ID":"46cb5cd9-ca77-4c57-9d83-b4ef015da993","Type":"ContainerDied","Data":"67a6e2dba6311d6abb77e1c0f35270c66cbe464d9055813a9c1e2329b5bbff77"} Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.969978 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a6e2dba6311d6abb77e1c0f35270c66cbe464d9055813a9c1e2329b5bbff77" Jan 31 05:06:27 crc kubenswrapper[4832]: I0131 05:06:27.970038 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l9vkx" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.044209 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw"] Jan 31 05:06:28 crc kubenswrapper[4832]: E0131 05:06:28.045304 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cb5cd9-ca77-4c57-9d83-b4ef015da993" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.045327 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cb5cd9-ca77-4c57-9d83-b4ef015da993" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.045522 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cb5cd9-ca77-4c57-9d83-b4ef015da993" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.046278 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.049664 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.049809 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.049936 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.050256 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.062748 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw"] Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.171006 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.171382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.171488 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbhg5\" (UniqueName: \"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.172036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.273853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.273935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbhg5\" (UniqueName: \"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.274060 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.274135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.280065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.280993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.284662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.296103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbhg5\" (UniqueName: \"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.364172 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.959625 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw"] Jan 31 05:06:28 crc kubenswrapper[4832]: I0131 05:06:28.985175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" event={"ID":"27dc3183-5db8-4c94-8247-f5af07376737","Type":"ContainerStarted","Data":"c00a74ebe5c111d936c7886e2a3db17237b0409bf639c88e7b6468a8dd691951"} Jan 31 05:06:30 crc kubenswrapper[4832]: I0131 05:06:30.000363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" event={"ID":"27dc3183-5db8-4c94-8247-f5af07376737","Type":"ContainerStarted","Data":"48765ddacec3b15e92438d802ce8d255e6ae35182df2af00bb8a107fef08778f"} Jan 31 05:06:30 crc kubenswrapper[4832]: I0131 05:06:30.032693 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" podStartSLOduration=1.482489467 podStartE2EDuration="2.032663487s" podCreationTimestamp="2026-01-31 05:06:28 +0000 UTC" firstStartedPulling="2026-01-31 05:06:28.973942523 +0000 UTC m=+1397.922764208" 
lastFinishedPulling="2026-01-31 05:06:29.524116513 +0000 UTC m=+1398.472938228" observedRunningTime="2026-01-31 05:06:30.022370707 +0000 UTC m=+1398.971192422" watchObservedRunningTime="2026-01-31 05:06:30.032663487 +0000 UTC m=+1398.981485182" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.616437 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.620158 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.638253 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.801910 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ngg\" (UniqueName: \"kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.802065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.802304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " 
pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.905310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.905371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ngg\" (UniqueName: \"kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.905434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.906315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.906310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " 
pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.932483 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ngg\" (UniqueName: \"kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg\") pod \"certified-operators-58wmx\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:03 crc kubenswrapper[4832]: I0131 05:07:03.947381 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:04 crc kubenswrapper[4832]: I0131 05:07:04.499976 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:05 crc kubenswrapper[4832]: I0131 05:07:05.395519 4832 generic.go:334] "Generic (PLEG): container finished" podID="31c74244-85a2-4e69-bc71-8db482c364ab" containerID="a263fb4ceab7887c4025c6e64048b8bb6d953a881729e54d1d50d4cb64d21d3d" exitCode=0 Jan 31 05:07:05 crc kubenswrapper[4832]: I0131 05:07:05.395612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerDied","Data":"a263fb4ceab7887c4025c6e64048b8bb6d953a881729e54d1d50d4cb64d21d3d"} Jan 31 05:07:05 crc kubenswrapper[4832]: I0131 05:07:05.395919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerStarted","Data":"5a36d774ff8bcd5b571fac6714892fefade26f551d621881908a3375137f9cc1"} Jan 31 05:07:07 crc kubenswrapper[4832]: I0131 05:07:07.426678 4832 generic.go:334] "Generic (PLEG): container finished" podID="31c74244-85a2-4e69-bc71-8db482c364ab" containerID="d6b8443e8c42099db3df962c1a322c30f51bbc384b86dd135e4add21888583e4" exitCode=0 Jan 31 05:07:07 crc 
kubenswrapper[4832]: I0131 05:07:07.426724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerDied","Data":"d6b8443e8c42099db3df962c1a322c30f51bbc384b86dd135e4add21888583e4"} Jan 31 05:07:08 crc kubenswrapper[4832]: I0131 05:07:08.457098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerStarted","Data":"ec1c11bbffb257c2d8f3bf8a4ccebddbc659ab8f58cf0f36f79f015739469a9e"} Jan 31 05:07:08 crc kubenswrapper[4832]: I0131 05:07:08.487370 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58wmx" podStartSLOduration=3.039798458 podStartE2EDuration="5.487345191s" podCreationTimestamp="2026-01-31 05:07:03 +0000 UTC" firstStartedPulling="2026-01-31 05:07:05.398047941 +0000 UTC m=+1434.346869656" lastFinishedPulling="2026-01-31 05:07:07.845594704 +0000 UTC m=+1436.794416389" observedRunningTime="2026-01-31 05:07:08.483386507 +0000 UTC m=+1437.432208202" watchObservedRunningTime="2026-01-31 05:07:08.487345191 +0000 UTC m=+1437.436166886" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.528885 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.533721 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.542472 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.705117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.705286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.705401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtplp\" (UniqueName: \"kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.807038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtplp\" (UniqueName: \"kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.807145 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.807211 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.807946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.807982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.831902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtplp\" (UniqueName: \"kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp\") pod \"redhat-operators-2j4nt\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:11 crc kubenswrapper[4832]: I0131 05:07:11.851848 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:12 crc kubenswrapper[4832]: W0131 05:07:12.401479 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd68445fa_68fa_4f2e_b8cd_963a362bd6f4.slice/crio-56f42c138a539f4ce389eee6f4d7dd93373165c696f5eadc549e404217ead463 WatchSource:0}: Error finding container 56f42c138a539f4ce389eee6f4d7dd93373165c696f5eadc549e404217ead463: Status 404 returned error can't find the container with id 56f42c138a539f4ce389eee6f4d7dd93373165c696f5eadc549e404217ead463 Jan 31 05:07:12 crc kubenswrapper[4832]: I0131 05:07:12.413495 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:12 crc kubenswrapper[4832]: I0131 05:07:12.502530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerStarted","Data":"56f42c138a539f4ce389eee6f4d7dd93373165c696f5eadc549e404217ead463"} Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.517781 4832 generic.go:334] "Generic (PLEG): container finished" podID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerID="38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e" exitCode=0 Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.517835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerDied","Data":"38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e"} Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.940012 4832 scope.go:117] "RemoveContainer" containerID="8d6bbe390a8c57797ffc70708f7575454e63eb59cde24adee45308efba4a18de" Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.948613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.948674 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:13 crc kubenswrapper[4832]: I0131 05:07:13.971739 4832 scope.go:117] "RemoveContainer" containerID="907d52516810dd9efc0434f02fd48f0280240b66254f860221c35ac7d4bd2bc5" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.017689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.075285 4832 scope.go:117] "RemoveContainer" containerID="fe640a3978bfad8bc6149619da785934aea9dfb14d297575b65e99a89f132127" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.103201 4832 scope.go:117] "RemoveContainer" containerID="eec549683e1c37d759a251067f9759dad3d26d9c9615de12f755086f9566344f" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.147121 4832 scope.go:117] "RemoveContainer" containerID="3f0e7b1a6323b3f1e897cd580643e2cb45b826cd414811d177c9d2e8a4776c3e" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.195975 4832 scope.go:117] "RemoveContainer" containerID="3d93a48b0327ff25a7fc6b95b58ea42d9562dbc528bfecd9074135b89a13df01" Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.550205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerStarted","Data":"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546"} Jan 31 05:07:14 crc kubenswrapper[4832]: I0131 05:07:14.629549 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:16 crc kubenswrapper[4832]: I0131 05:07:16.383596 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:16 crc kubenswrapper[4832]: I0131 05:07:16.579157 4832 generic.go:334] "Generic (PLEG): container finished" podID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerID="14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546" exitCode=0 Jan 31 05:07:16 crc kubenswrapper[4832]: I0131 05:07:16.580040 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58wmx" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="registry-server" containerID="cri-o://ec1c11bbffb257c2d8f3bf8a4ccebddbc659ab8f58cf0f36f79f015739469a9e" gracePeriod=2 Jan 31 05:07:16 crc kubenswrapper[4832]: I0131 05:07:16.580875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerDied","Data":"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546"} Jan 31 05:07:18 crc kubenswrapper[4832]: I0131 05:07:18.606186 4832 generic.go:334] "Generic (PLEG): container finished" podID="31c74244-85a2-4e69-bc71-8db482c364ab" containerID="ec1c11bbffb257c2d8f3bf8a4ccebddbc659ab8f58cf0f36f79f015739469a9e" exitCode=0 Jan 31 05:07:18 crc kubenswrapper[4832]: I0131 05:07:18.606453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerDied","Data":"ec1c11bbffb257c2d8f3bf8a4ccebddbc659ab8f58cf0f36f79f015739469a9e"} Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.018162 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.200628 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6ngg\" (UniqueName: \"kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg\") pod \"31c74244-85a2-4e69-bc71-8db482c364ab\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.200782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content\") pod \"31c74244-85a2-4e69-bc71-8db482c364ab\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.200865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities\") pod \"31c74244-85a2-4e69-bc71-8db482c364ab\" (UID: \"31c74244-85a2-4e69-bc71-8db482c364ab\") " Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.201937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities" (OuterVolumeSpecName: "utilities") pod "31c74244-85a2-4e69-bc71-8db482c364ab" (UID: "31c74244-85a2-4e69-bc71-8db482c364ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.208272 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg" (OuterVolumeSpecName: "kube-api-access-r6ngg") pod "31c74244-85a2-4e69-bc71-8db482c364ab" (UID: "31c74244-85a2-4e69-bc71-8db482c364ab"). InnerVolumeSpecName "kube-api-access-r6ngg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.250681 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c74244-85a2-4e69-bc71-8db482c364ab" (UID: "31c74244-85a2-4e69-bc71-8db482c364ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.302816 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6ngg\" (UniqueName: \"kubernetes.io/projected/31c74244-85a2-4e69-bc71-8db482c364ab-kube-api-access-r6ngg\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.302856 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.302868 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c74244-85a2-4e69-bc71-8db482c364ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.624599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerStarted","Data":"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616"} Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.628842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58wmx" event={"ID":"31c74244-85a2-4e69-bc71-8db482c364ab","Type":"ContainerDied","Data":"5a36d774ff8bcd5b571fac6714892fefade26f551d621881908a3375137f9cc1"} Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 
05:07:19.628910 4832 scope.go:117] "RemoveContainer" containerID="ec1c11bbffb257c2d8f3bf8a4ccebddbc659ab8f58cf0f36f79f015739469a9e" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.628924 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58wmx" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.654496 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2j4nt" podStartSLOduration=3.623427398 podStartE2EDuration="8.654472911s" podCreationTimestamp="2026-01-31 05:07:11 +0000 UTC" firstStartedPulling="2026-01-31 05:07:13.520301433 +0000 UTC m=+1442.469123118" lastFinishedPulling="2026-01-31 05:07:18.551346916 +0000 UTC m=+1447.500168631" observedRunningTime="2026-01-31 05:07:19.651064895 +0000 UTC m=+1448.599886590" watchObservedRunningTime="2026-01-31 05:07:19.654472911 +0000 UTC m=+1448.603294596" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.673734 4832 scope.go:117] "RemoveContainer" containerID="d6b8443e8c42099db3df962c1a322c30f51bbc384b86dd135e4add21888583e4" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.687122 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.695760 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58wmx"] Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.705916 4832 scope.go:117] "RemoveContainer" containerID="a263fb4ceab7887c4025c6e64048b8bb6d953a881729e54d1d50d4cb64d21d3d" Jan 31 05:07:19 crc kubenswrapper[4832]: I0131 05:07:19.877002 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" path="/var/lib/kubelet/pods/31c74244-85a2-4e69-bc71-8db482c364ab/volumes" Jan 31 05:07:21 crc kubenswrapper[4832]: I0131 05:07:21.852834 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:21 crc kubenswrapper[4832]: I0131 05:07:21.854205 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:22 crc kubenswrapper[4832]: I0131 05:07:22.940368 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2j4nt" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="registry-server" probeResult="failure" output=< Jan 31 05:07:22 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:07:22 crc kubenswrapper[4832]: > Jan 31 05:07:31 crc kubenswrapper[4832]: I0131 05:07:31.909861 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:31 crc kubenswrapper[4832]: I0131 05:07:31.977864 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:32 crc kubenswrapper[4832]: I0131 05:07:32.168608 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:33 crc kubenswrapper[4832]: I0131 05:07:33.818483 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2j4nt" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="registry-server" containerID="cri-o://6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616" gracePeriod=2 Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.314272 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.380667 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtplp\" (UniqueName: \"kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp\") pod \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.380779 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities\") pod \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.380986 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content\") pod \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\" (UID: \"d68445fa-68fa-4f2e-b8cd-963a362bd6f4\") " Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.381791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities" (OuterVolumeSpecName: "utilities") pod "d68445fa-68fa-4f2e-b8cd-963a362bd6f4" (UID: "d68445fa-68fa-4f2e-b8cd-963a362bd6f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.381933 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.394639 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp" (OuterVolumeSpecName: "kube-api-access-vtplp") pod "d68445fa-68fa-4f2e-b8cd-963a362bd6f4" (UID: "d68445fa-68fa-4f2e-b8cd-963a362bd6f4"). InnerVolumeSpecName "kube-api-access-vtplp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.484481 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtplp\" (UniqueName: \"kubernetes.io/projected/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-kube-api-access-vtplp\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.527821 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d68445fa-68fa-4f2e-b8cd-963a362bd6f4" (UID: "d68445fa-68fa-4f2e-b8cd-963a362bd6f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.586710 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68445fa-68fa-4f2e-b8cd-963a362bd6f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.836549 4832 generic.go:334] "Generic (PLEG): container finished" podID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerID="6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616" exitCode=0 Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.836635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerDied","Data":"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616"} Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.836693 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2j4nt" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.836813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j4nt" event={"ID":"d68445fa-68fa-4f2e-b8cd-963a362bd6f4","Type":"ContainerDied","Data":"56f42c138a539f4ce389eee6f4d7dd93373165c696f5eadc549e404217ead463"} Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.836861 4832 scope.go:117] "RemoveContainer" containerID="6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.877608 4832 scope.go:117] "RemoveContainer" containerID="14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.889612 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.906608 4832 scope.go:117] "RemoveContainer" containerID="38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.910862 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2j4nt"] Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.965819 4832 scope.go:117] "RemoveContainer" containerID="6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616" Jan 31 05:07:34 crc kubenswrapper[4832]: E0131 05:07:34.966528 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616\": container with ID starting with 6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616 not found: ID does not exist" containerID="6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.966598 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616"} err="failed to get container status \"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616\": rpc error: code = NotFound desc = could not find container \"6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616\": container with ID starting with 6ec25957636307c105a0fce3581580da7c739db4ed90af3d0c0f88eaf3ab3616 not found: ID does not exist" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.966635 4832 scope.go:117] "RemoveContainer" containerID="14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546" Jan 31 05:07:34 crc kubenswrapper[4832]: E0131 05:07:34.967443 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546\": container with ID starting with 14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546 not found: ID does not exist" containerID="14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.967511 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546"} err="failed to get container status \"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546\": rpc error: code = NotFound desc = could not find container \"14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546\": container with ID starting with 14671fcc486bd0477bf0692d0d90558270d5303bca014e1a92bf2efd2d476546 not found: ID does not exist" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.967547 4832 scope.go:117] "RemoveContainer" containerID="38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e" Jan 31 05:07:34 crc kubenswrapper[4832]: E0131 
05:07:34.968027 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e\": container with ID starting with 38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e not found: ID does not exist" containerID="38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e" Jan 31 05:07:34 crc kubenswrapper[4832]: I0131 05:07:34.968058 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e"} err="failed to get container status \"38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e\": rpc error: code = NotFound desc = could not find container \"38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e\": container with ID starting with 38edfde0b335473cc04d89bf3004cdcb2e14ac1fa8cfe57ca65be3d6ca47eb7e not found: ID does not exist" Jan 31 05:07:35 crc kubenswrapper[4832]: I0131 05:07:35.881514 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" path="/var/lib/kubelet/pods/d68445fa-68fa-4f2e-b8cd-963a362bd6f4/volumes" Jan 31 05:07:48 crc kubenswrapper[4832]: I0131 05:07:48.540246 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:07:48 crc kubenswrapper[4832]: I0131 05:07:48.541247 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 31 05:08:14 crc kubenswrapper[4832]: I0131 05:08:14.380697 4832 scope.go:117] "RemoveContainer" containerID="53b347cfd4d74f353c7aa1ec27042440313f790bec4c35668d351fd6187862ed" Jan 31 05:08:14 crc kubenswrapper[4832]: I0131 05:08:14.420848 4832 scope.go:117] "RemoveContainer" containerID="a3f0098f01cd9efd61d11f50e7f0d883ca02df7cb3f7e9eff6bc7a6171800fd4" Jan 31 05:08:18 crc kubenswrapper[4832]: I0131 05:08:18.540880 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:08:18 crc kubenswrapper[4832]: I0131 05:08:18.541600 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.540743 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.541381 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.541442 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.542442 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.542518 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" gracePeriod=600 Jan 31 05:08:48 crc kubenswrapper[4832]: E0131 05:08:48.668394 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.819326 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" exitCode=0 Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.819388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90"} 
Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.819434 4832 scope.go:117] "RemoveContainer" containerID="544cd44b27a0e014964ac1b0ff9bce944334e127b15e2971994bb94547d5341e" Jan 31 05:08:48 crc kubenswrapper[4832]: I0131 05:08:48.820504 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:08:48 crc kubenswrapper[4832]: E0131 05:08:48.821033 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:08:59 crc kubenswrapper[4832]: I0131 05:08:59.860509 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:08:59 crc kubenswrapper[4832]: E0131 05:08:59.861956 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:09:12 crc kubenswrapper[4832]: I0131 05:09:12.859425 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:09:12 crc kubenswrapper[4832]: E0131 05:09:12.860754 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:09:27 crc kubenswrapper[4832]: I0131 05:09:27.860380 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:09:27 crc kubenswrapper[4832]: E0131 05:09:27.861994 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:09:31 crc kubenswrapper[4832]: I0131 05:09:31.446652 4832 generic.go:334] "Generic (PLEG): container finished" podID="27dc3183-5db8-4c94-8247-f5af07376737" containerID="48765ddacec3b15e92438d802ce8d255e6ae35182df2af00bb8a107fef08778f" exitCode=0 Jan 31 05:09:31 crc kubenswrapper[4832]: I0131 05:09:31.446782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" event={"ID":"27dc3183-5db8-4c94-8247-f5af07376737","Type":"ContainerDied","Data":"48765ddacec3b15e92438d802ce8d255e6ae35182df2af00bb8a107fef08778f"} Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.612423 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.613878 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="extract-utilities" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.613911 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="extract-utilities" Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.613940 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.613954 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.613982 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.613999 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.614020 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="extract-utilities" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.614033 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="extract-utilities" Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.614078 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="extract-content" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.614093 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="extract-content" Jan 31 05:09:32 crc kubenswrapper[4832]: E0131 05:09:32.614116 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="extract-content" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.614124 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="extract-content" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.614355 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c74244-85a2-4e69-bc71-8db482c364ab" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.614393 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68445fa-68fa-4f2e-b8cd-963a362bd6f4" containerName="registry-server" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.616297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.637102 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.760515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7hc\" (UniqueName: \"kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.760609 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.760634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content\") pod \"redhat-marketplace-5rbkf\" (UID: 
\"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.871492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7hc\" (UniqueName: \"kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.871826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.871849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.873422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.874884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " 
pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.901713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7hc\" (UniqueName: \"kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc\") pod \"redhat-marketplace-5rbkf\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:32 crc kubenswrapper[4832]: I0131 05:09:32.982296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.021939 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:09:33 crc kubenswrapper[4832]: E0131 05:09:33.025447 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.074835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbhg5\" (UniqueName: \"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5\") pod \"27dc3183-5db8-4c94-8247-f5af07376737\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.074954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory\") pod \"27dc3183-5db8-4c94-8247-f5af07376737\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.080888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5" (OuterVolumeSpecName: "kube-api-access-vbhg5") pod "27dc3183-5db8-4c94-8247-f5af07376737" (UID: "27dc3183-5db8-4c94-8247-f5af07376737"). InnerVolumeSpecName "kube-api-access-vbhg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.111375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory" (OuterVolumeSpecName: "inventory") pod "27dc3183-5db8-4c94-8247-f5af07376737" (UID: "27dc3183-5db8-4c94-8247-f5af07376737"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.176462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle\") pod \"27dc3183-5db8-4c94-8247-f5af07376737\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.176959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam\") pod \"27dc3183-5db8-4c94-8247-f5af07376737\" (UID: \"27dc3183-5db8-4c94-8247-f5af07376737\") " Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.177655 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbhg5\" (UniqueName: \"kubernetes.io/projected/27dc3183-5db8-4c94-8247-f5af07376737-kube-api-access-vbhg5\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.177667 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.180787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "27dc3183-5db8-4c94-8247-f5af07376737" (UID: "27dc3183-5db8-4c94-8247-f5af07376737"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.206908 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27dc3183-5db8-4c94-8247-f5af07376737" (UID: "27dc3183-5db8-4c94-8247-f5af07376737"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.280225 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.280272 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dc3183-5db8-4c94-8247-f5af07376737-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.476360 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" event={"ID":"27dc3183-5db8-4c94-8247-f5af07376737","Type":"ContainerDied","Data":"c00a74ebe5c111d936c7886e2a3db17237b0409bf639c88e7b6468a8dd691951"} Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.476416 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00a74ebe5c111d936c7886e2a3db17237b0409bf639c88e7b6468a8dd691951" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.476450 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.572795 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.611471 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx"] Jan 31 05:09:33 crc kubenswrapper[4832]: E0131 05:09:33.612183 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dc3183-5db8-4c94-8247-f5af07376737" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.612212 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dc3183-5db8-4c94-8247-f5af07376737" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.612485 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dc3183-5db8-4c94-8247-f5af07376737" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.613317 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.615880 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.615905 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.616699 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.616784 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.621771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx"] Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.690133 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.690502 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 
05:09:33.690548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnw6n\" (UniqueName: \"kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.793458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnw6n\" (UniqueName: \"kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.793854 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.793921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.802078 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.802180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.812026 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnw6n\" (UniqueName: \"kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:33 crc kubenswrapper[4832]: I0131 05:09:33.963812 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:09:34 crc kubenswrapper[4832]: I0131 05:09:34.348808 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx"] Jan 31 05:09:34 crc kubenswrapper[4832]: W0131 05:09:34.356595 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecaf2da0_d078_4810_9574_05b12bd09288.slice/crio-d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c WatchSource:0}: Error finding container d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c: Status 404 returned error can't find the container with id d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c Jan 31 05:09:34 crc kubenswrapper[4832]: I0131 05:09:34.488244 4832 generic.go:334] "Generic (PLEG): container finished" podID="a98ad638-16b2-4c46-b425-118a42072c8e" containerID="eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e" exitCode=0 Jan 31 05:09:34 crc kubenswrapper[4832]: I0131 05:09:34.488321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerDied","Data":"eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e"} Jan 31 05:09:34 crc kubenswrapper[4832]: I0131 05:09:34.488356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerStarted","Data":"217be7c496dabd04dedac1084af0b91bbdf4bafb9227ce88119dcccdae269777"} Jan 31 05:09:34 crc kubenswrapper[4832]: I0131 05:09:34.491425 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" 
event={"ID":"ecaf2da0-d078-4810-9574-05b12bd09288","Type":"ContainerStarted","Data":"d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c"} Jan 31 05:09:35 crc kubenswrapper[4832]: I0131 05:09:35.507239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" event={"ID":"ecaf2da0-d078-4810-9574-05b12bd09288","Type":"ContainerStarted","Data":"d912e736f764e84a8b86072924479ae07673a8e5822b1bc890fd51d2e332d7b1"} Jan 31 05:09:35 crc kubenswrapper[4832]: I0131 05:09:35.541356 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" podStartSLOduration=2.05323865 podStartE2EDuration="2.541327257s" podCreationTimestamp="2026-01-31 05:09:33 +0000 UTC" firstStartedPulling="2026-01-31 05:09:34.359408562 +0000 UTC m=+1583.308230237" lastFinishedPulling="2026-01-31 05:09:34.847497109 +0000 UTC m=+1583.796318844" observedRunningTime="2026-01-31 05:09:35.532340527 +0000 UTC m=+1584.481162262" watchObservedRunningTime="2026-01-31 05:09:35.541327257 +0000 UTC m=+1584.490148942" Jan 31 05:09:36 crc kubenswrapper[4832]: I0131 05:09:36.519791 4832 generic.go:334] "Generic (PLEG): container finished" podID="a98ad638-16b2-4c46-b425-118a42072c8e" containerID="0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1" exitCode=0 Jan 31 05:09:36 crc kubenswrapper[4832]: I0131 05:09:36.519884 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerDied","Data":"0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1"} Jan 31 05:09:37 crc kubenswrapper[4832]: I0131 05:09:37.532445 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" 
event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerStarted","Data":"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f"} Jan 31 05:09:37 crc kubenswrapper[4832]: I0131 05:09:37.570278 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rbkf" podStartSLOduration=3.071208456 podStartE2EDuration="5.570253771s" podCreationTimestamp="2026-01-31 05:09:32 +0000 UTC" firstStartedPulling="2026-01-31 05:09:34.491996904 +0000 UTC m=+1583.440818589" lastFinishedPulling="2026-01-31 05:09:36.991042219 +0000 UTC m=+1585.939863904" observedRunningTime="2026-01-31 05:09:37.556512964 +0000 UTC m=+1586.505334659" watchObservedRunningTime="2026-01-31 05:09:37.570253771 +0000 UTC m=+1586.519075466" Jan 31 05:09:42 crc kubenswrapper[4832]: I0131 05:09:42.860402 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:09:42 crc kubenswrapper[4832]: E0131 05:09:42.861622 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:09:42 crc kubenswrapper[4832]: I0131 05:09:42.985133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:42 crc kubenswrapper[4832]: I0131 05:09:42.985229 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:43 crc kubenswrapper[4832]: I0131 05:09:43.060962 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:43 crc kubenswrapper[4832]: I0131 05:09:43.697075 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:43 crc kubenswrapper[4832]: I0131 05:09:43.772369 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:45 crc kubenswrapper[4832]: I0131 05:09:45.646963 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rbkf" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="registry-server" containerID="cri-o://9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f" gracePeriod=2 Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.205780 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.296720 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7hc\" (UniqueName: \"kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc\") pod \"a98ad638-16b2-4c46-b425-118a42072c8e\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.296956 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content\") pod \"a98ad638-16b2-4c46-b425-118a42072c8e\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.297156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities\") pod 
\"a98ad638-16b2-4c46-b425-118a42072c8e\" (UID: \"a98ad638-16b2-4c46-b425-118a42072c8e\") " Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.297999 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities" (OuterVolumeSpecName: "utilities") pod "a98ad638-16b2-4c46-b425-118a42072c8e" (UID: "a98ad638-16b2-4c46-b425-118a42072c8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.305387 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc" (OuterVolumeSpecName: "kube-api-access-vx7hc") pod "a98ad638-16b2-4c46-b425-118a42072c8e" (UID: "a98ad638-16b2-4c46-b425-118a42072c8e"). InnerVolumeSpecName "kube-api-access-vx7hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.327227 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a98ad638-16b2-4c46-b425-118a42072c8e" (UID: "a98ad638-16b2-4c46-b425-118a42072c8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.399557 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.399619 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98ad638-16b2-4c46-b425-118a42072c8e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.399638 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7hc\" (UniqueName: \"kubernetes.io/projected/a98ad638-16b2-4c46-b425-118a42072c8e-kube-api-access-vx7hc\") on node \"crc\" DevicePath \"\"" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.669825 4832 generic.go:334] "Generic (PLEG): container finished" podID="a98ad638-16b2-4c46-b425-118a42072c8e" containerID="9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f" exitCode=0 Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.669908 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rbkf" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.669901 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerDied","Data":"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f"} Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.670314 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rbkf" event={"ID":"a98ad638-16b2-4c46-b425-118a42072c8e","Type":"ContainerDied","Data":"217be7c496dabd04dedac1084af0b91bbdf4bafb9227ce88119dcccdae269777"} Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.670341 4832 scope.go:117] "RemoveContainer" containerID="9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.704931 4832 scope.go:117] "RemoveContainer" containerID="0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.716737 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.733982 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rbkf"] Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.734833 4832 scope.go:117] "RemoveContainer" containerID="eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.788751 4832 scope.go:117] "RemoveContainer" containerID="9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f" Jan 31 05:09:46 crc kubenswrapper[4832]: E0131 05:09:46.789221 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f\": container with ID starting with 9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f not found: ID does not exist" containerID="9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.789302 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f"} err="failed to get container status \"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f\": rpc error: code = NotFound desc = could not find container \"9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f\": container with ID starting with 9b37d8337c7d7ab15fdd237fcd3233d28c5a479d24e2e039c568d6ec9d6f741f not found: ID does not exist" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.789343 4832 scope.go:117] "RemoveContainer" containerID="0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1" Jan 31 05:09:46 crc kubenswrapper[4832]: E0131 05:09:46.789822 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1\": container with ID starting with 0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1 not found: ID does not exist" containerID="0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.789855 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1"} err="failed to get container status \"0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1\": rpc error: code = NotFound desc = could not find container \"0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1\": container with ID 
starting with 0f2ebb38014d2e3e11ba5fe4120ac533a4b2ea9d2d9b5736794653c826e0e4a1 not found: ID does not exist" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.789883 4832 scope.go:117] "RemoveContainer" containerID="eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e" Jan 31 05:09:46 crc kubenswrapper[4832]: E0131 05:09:46.790129 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e\": container with ID starting with eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e not found: ID does not exist" containerID="eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e" Jan 31 05:09:46 crc kubenswrapper[4832]: I0131 05:09:46.790165 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e"} err="failed to get container status \"eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e\": rpc error: code = NotFound desc = could not find container \"eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e\": container with ID starting with eb358ad2366e47bc80487a270a676e50ea1a00754840adefae04a8be10a07c3e not found: ID does not exist" Jan 31 05:09:47 crc kubenswrapper[4832]: I0131 05:09:47.879223 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" path="/var/lib/kubelet/pods/a98ad638-16b2-4c46-b425-118a42072c8e/volumes" Jan 31 05:09:54 crc kubenswrapper[4832]: I0131 05:09:54.860353 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:09:54 crc kubenswrapper[4832]: E0131 05:09:54.861503 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:05 crc kubenswrapper[4832]: I0131 05:10:05.860010 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:10:05 crc kubenswrapper[4832]: E0131 05:10:05.861043 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:14 crc kubenswrapper[4832]: I0131 05:10:14.596248 4832 scope.go:117] "RemoveContainer" containerID="bdcf5b3e67cc08a88be96a740cd87029e3d9ef7e02f9ae8f19161679b1ac6374" Jan 31 05:10:14 crc kubenswrapper[4832]: I0131 05:10:14.629243 4832 scope.go:117] "RemoveContainer" containerID="d35fa820c2d7ff7316dd790325fc573ff5f45193a6b85ae0a592c072f1f188d4" Jan 31 05:10:20 crc kubenswrapper[4832]: I0131 05:10:20.864459 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:10:20 crc kubenswrapper[4832]: E0131 05:10:20.865295 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:30 crc 
kubenswrapper[4832]: I0131 05:10:30.072165 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-72zzj"] Jan 31 05:10:30 crc kubenswrapper[4832]: I0131 05:10:30.086341 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-40db-account-create-update-26dfv"] Jan 31 05:10:30 crc kubenswrapper[4832]: I0131 05:10:30.095456 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-72zzj"] Jan 31 05:10:30 crc kubenswrapper[4832]: I0131 05:10:30.106674 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-40db-account-create-update-26dfv"] Jan 31 05:10:31 crc kubenswrapper[4832]: I0131 05:10:31.873952 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b9dc67-5aba-4ee4-9771-78e9ad27206c" path="/var/lib/kubelet/pods/c7b9dc67-5aba-4ee4-9771-78e9ad27206c/volumes" Jan 31 05:10:31 crc kubenswrapper[4832]: I0131 05:10:31.875230 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf835b76-ea7d-499a-b532-229186e9e54f" path="/var/lib/kubelet/pods/cf835b76-ea7d-499a-b532-229186e9e54f/volumes" Jan 31 05:10:32 crc kubenswrapper[4832]: I0131 05:10:32.859403 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:10:32 crc kubenswrapper[4832]: E0131 05:10:32.860232 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.070250 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4shhs"] Jan 31 05:10:35 crc 
kubenswrapper[4832]: I0131 05:10:35.087648 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-67f6-account-create-update-b6kd5"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.100162 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-167e-account-create-update-2qthj"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.113424 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4shhs"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.123224 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-m4zgg"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.132544 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-67f6-account-create-update-b6kd5"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.143752 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-167e-account-create-update-2qthj"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.152510 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-m4zgg"] Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.879001 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11927687-2cdb-407c-900c-6ed0a23438a9" path="/var/lib/kubelet/pods/11927687-2cdb-407c-900c-6ed0a23438a9/volumes" Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.880539 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d3f7cb-9922-4e2e-91e3-7a14015cadc2" path="/var/lib/kubelet/pods/30d3f7cb-9922-4e2e-91e3-7a14015cadc2/volumes" Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.881890 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ffd399-2a10-4bc8-a99b-7b1f472193bc" path="/var/lib/kubelet/pods/72ffd399-2a10-4bc8-a99b-7b1f472193bc/volumes" Jan 31 05:10:35 crc kubenswrapper[4832]: I0131 05:10:35.883005 4832 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889" path="/var/lib/kubelet/pods/cfaf1fcd-6b83-4fc4-a3eb-54a2c513d889/volumes" Jan 31 05:10:45 crc kubenswrapper[4832]: I0131 05:10:45.860763 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:10:45 crc kubenswrapper[4832]: E0131 05:10:45.861832 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.075119 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-m2t2g"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.087253 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t6rff"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.099219 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-25g4q"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.109141 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qbknh"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.117097 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-m2t2g"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.124454 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t6rff"] Jan 31 05:10:54 crc kubenswrapper[4832]: I0131 05:10:54.132003 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-25g4q"] Jan 31 05:10:54 crc 
kubenswrapper[4832]: I0131 05:10:54.139164 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qbknh"] Jan 31 05:10:55 crc kubenswrapper[4832]: I0131 05:10:55.872036 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17737fb8-509f-41f5-ba32-e7079f79839c" path="/var/lib/kubelet/pods/17737fb8-509f-41f5-ba32-e7079f79839c/volumes" Jan 31 05:10:55 crc kubenswrapper[4832]: I0131 05:10:55.874231 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b8bcc6-a968-4bfc-ba6c-7431d3f41deb" path="/var/lib/kubelet/pods/25b8bcc6-a968-4bfc-ba6c-7431d3f41deb/volumes" Jan 31 05:10:55 crc kubenswrapper[4832]: I0131 05:10:55.874903 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b042eb-c890-4f83-96c3-a1f2fbd6d712" path="/var/lib/kubelet/pods/d9b042eb-c890-4f83-96c3-a1f2fbd6d712/volumes" Jan 31 05:10:55 crc kubenswrapper[4832]: I0131 05:10:55.875509 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8216556-80f1-47a9-8a8c-d01c57f8dd71" path="/var/lib/kubelet/pods/e8216556-80f1-47a9-8a8c-d01c57f8dd71/volumes" Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.057120 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2130-account-create-update-hsf5h"] Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.072643 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1920-account-create-update-kjz5k"] Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.082586 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a5be-account-create-update-r78dv"] Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.094776 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1920-account-create-update-kjz5k"] Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.106277 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-2130-account-create-update-hsf5h"] Jan 31 05:10:58 crc kubenswrapper[4832]: I0131 05:10:58.114576 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a5be-account-create-update-r78dv"] Jan 31 05:10:59 crc kubenswrapper[4832]: I0131 05:10:59.859717 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:10:59 crc kubenswrapper[4832]: E0131 05:10:59.860367 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:10:59 crc kubenswrapper[4832]: I0131 05:10:59.880460 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6fea4b-c60e-4068-9814-50caf6f127aa" path="/var/lib/kubelet/pods/2f6fea4b-c60e-4068-9814-50caf6f127aa/volumes" Jan 31 05:10:59 crc kubenswrapper[4832]: I0131 05:10:59.881728 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595cc66d-63ae-4fe8-bf8a-457a6dd22bdf" path="/var/lib/kubelet/pods/595cc66d-63ae-4fe8-bf8a-457a6dd22bdf/volumes" Jan 31 05:10:59 crc kubenswrapper[4832]: I0131 05:10:59.882896 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac9656f-3e3a-486c-a90e-94162a824223" path="/var/lib/kubelet/pods/dac9656f-3e3a-486c-a90e-94162a824223/volumes" Jan 31 05:11:01 crc kubenswrapper[4832]: I0131 05:11:01.041000 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-92w4s"] Jan 31 05:11:01 crc kubenswrapper[4832]: I0131 05:11:01.053945 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-92w4s"] Jan 31 05:11:01 crc 
kubenswrapper[4832]: I0131 05:11:01.873921 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffef7600-94e5-444a-be7e-215a512c0233" path="/var/lib/kubelet/pods/ffef7600-94e5-444a-be7e-215a512c0233/volumes" Jan 31 05:11:02 crc kubenswrapper[4832]: I0131 05:11:02.583131 4832 generic.go:334] "Generic (PLEG): container finished" podID="ecaf2da0-d078-4810-9574-05b12bd09288" containerID="d912e736f764e84a8b86072924479ae07673a8e5822b1bc890fd51d2e332d7b1" exitCode=0 Jan 31 05:11:02 crc kubenswrapper[4832]: I0131 05:11:02.583243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" event={"ID":"ecaf2da0-d078-4810-9574-05b12bd09288","Type":"ContainerDied","Data":"d912e736f764e84a8b86072924479ae07673a8e5822b1bc890fd51d2e332d7b1"} Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.126734 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.212850 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam\") pod \"ecaf2da0-d078-4810-9574-05b12bd09288\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.213694 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnw6n\" (UniqueName: \"kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n\") pod \"ecaf2da0-d078-4810-9574-05b12bd09288\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.213962 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory\") pod \"ecaf2da0-d078-4810-9574-05b12bd09288\" (UID: \"ecaf2da0-d078-4810-9574-05b12bd09288\") " Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.222427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n" (OuterVolumeSpecName: "kube-api-access-bnw6n") pod "ecaf2da0-d078-4810-9574-05b12bd09288" (UID: "ecaf2da0-d078-4810-9574-05b12bd09288"). InnerVolumeSpecName "kube-api-access-bnw6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.245774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ecaf2da0-d078-4810-9574-05b12bd09288" (UID: "ecaf2da0-d078-4810-9574-05b12bd09288"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.256853 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory" (OuterVolumeSpecName: "inventory") pod "ecaf2da0-d078-4810-9574-05b12bd09288" (UID: "ecaf2da0-d078-4810-9574-05b12bd09288"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.317004 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.317041 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecaf2da0-d078-4810-9574-05b12bd09288-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.317053 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnw6n\" (UniqueName: \"kubernetes.io/projected/ecaf2da0-d078-4810-9574-05b12bd09288-kube-api-access-bnw6n\") on node \"crc\" DevicePath \"\"" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.612051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" event={"ID":"ecaf2da0-d078-4810-9574-05b12bd09288","Type":"ContainerDied","Data":"d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c"} Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.612123 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32fc897d65e1c9071ce4482850cca180797723fe448b9155f499e99d6a4117c" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.612216 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744074 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4"] Jan 31 05:11:04 crc kubenswrapper[4832]: E0131 05:11:04.744448 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="extract-utilities" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744465 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="extract-utilities" Jan 31 05:11:04 crc kubenswrapper[4832]: E0131 05:11:04.744495 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaf2da0-d078-4810-9574-05b12bd09288" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744505 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaf2da0-d078-4810-9574-05b12bd09288" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 05:11:04 crc kubenswrapper[4832]: E0131 05:11:04.744520 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="registry-server" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744527 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="registry-server" Jan 31 05:11:04 crc kubenswrapper[4832]: E0131 05:11:04.744550 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="extract-content" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744573 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="extract-content" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 
05:11:04.744742 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaf2da0-d078-4810-9574-05b12bd09288" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.744765 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98ad638-16b2-4c46-b425-118a42072c8e" containerName="registry-server" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.745412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.748154 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.749137 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.749247 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.750847 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.774300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4"] Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.827477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.827658 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6zf\" (UniqueName: \"kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.827746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.929657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.929786 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6zf\" (UniqueName: \"kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: 
I0131 05:11:04.929828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.934986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.936036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:04 crc kubenswrapper[4832]: I0131 05:11:04.952656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6zf\" (UniqueName: \"kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.068896 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wrfck"] Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.069786 4832 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.079900 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wrfck"] Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.631511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4"] Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.643463 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:11:05 crc kubenswrapper[4832]: I0131 05:11:05.872658 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d6dbe3e-1852-493a-926a-95d85495da09" path="/var/lib/kubelet/pods/4d6dbe3e-1852-493a-926a-95d85495da09/volumes" Jan 31 05:11:06 crc kubenswrapper[4832]: I0131 05:11:06.643222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" event={"ID":"3b016223-cc19-45ea-9ccb-fc81103e1e5f","Type":"ContainerStarted","Data":"aa12d4ed94a9fe3b0f5bfb701d246084146f83f76bfc766520e40435a65d3491"} Jan 31 05:11:06 crc kubenswrapper[4832]: I0131 05:11:06.643786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" event={"ID":"3b016223-cc19-45ea-9ccb-fc81103e1e5f","Type":"ContainerStarted","Data":"a880dae3e6c4a030e6e80828bc346a2d84175c82ce1941cef85f3aa50fceced1"} Jan 31 05:11:06 crc kubenswrapper[4832]: I0131 05:11:06.673754 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" podStartSLOduration=2.204252976 podStartE2EDuration="2.673718916s" podCreationTimestamp="2026-01-31 05:11:04 +0000 UTC" firstStartedPulling="2026-01-31 05:11:05.643202069 +0000 UTC m=+1674.592023754" 
lastFinishedPulling="2026-01-31 05:11:06.112667979 +0000 UTC m=+1675.061489694" observedRunningTime="2026-01-31 05:11:06.668586026 +0000 UTC m=+1675.617407751" watchObservedRunningTime="2026-01-31 05:11:06.673718916 +0000 UTC m=+1675.622540651" Jan 31 05:11:12 crc kubenswrapper[4832]: I0131 05:11:12.859174 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:11:12 crc kubenswrapper[4832]: E0131 05:11:12.860271 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.715120 4832 scope.go:117] "RemoveContainer" containerID="a5493cc636deef6707a12f3b7e5907e8e15978c00f318e2d7df3142bbe7d8a2b" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.747129 4832 scope.go:117] "RemoveContainer" containerID="d019be7a2969f0359ad46a779bdd191b7e10d045e420e0eeacb7283ed983a3ab" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.837514 4832 scope.go:117] "RemoveContainer" containerID="98db13dbe1176e447b3199cd7e1f4dcb0f0be4fd50b1f933d8e0bd98860dc12b" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.875747 4832 scope.go:117] "RemoveContainer" containerID="494c402f53aab4347a291134569f5319b7c69ad36ad5a3ede27303b93b38cc1c" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.935991 4832 scope.go:117] "RemoveContainer" containerID="0c85861e44b44826382484c91f4f82ed13a1eb4bdd6030f2582e4d436b533947" Jan 31 05:11:14 crc kubenswrapper[4832]: I0131 05:11:14.963770 4832 scope.go:117] "RemoveContainer" containerID="bc9ccf48fe0badbd78f5b2ad5ea3f295a94ba77b3dc9155e92768422604bd507" Jan 31 
05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.013071 4832 scope.go:117] "RemoveContainer" containerID="bb0f0186b50c9eca7cc4e7874b8dd3d3df80f652db5e3cfed202595e39092956" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.054482 4832 scope.go:117] "RemoveContainer" containerID="520cceb62a925e47641f0ea1f6b1a784ebe340c38e4c4d88f6b0fbeeec04936d" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.085277 4832 scope.go:117] "RemoveContainer" containerID="7ab17ce58d44c113ee06e7986521b56c6c12f4da917ece9486b834430f041ea4" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.109747 4832 scope.go:117] "RemoveContainer" containerID="0eb48ae8fc75cfa70b4a70909ba66ff101a56e28766e4650ea85ff0c0d0018f3" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.133091 4832 scope.go:117] "RemoveContainer" containerID="0627e10c9e962a1dc681a4ca9b3c1bedc80be1ead41b0d7c81e4b67e50bd6728" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.160120 4832 scope.go:117] "RemoveContainer" containerID="9312524546f2f975c832ce2d431e9575dd4ec53bc3c7a0d330edaef8c7f8fa8e" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.185480 4832 scope.go:117] "RemoveContainer" containerID="14366572e215d4f19df3cc6d7a77394e35ec73188a127cc4fc777f7959c0bf06" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.207380 4832 scope.go:117] "RemoveContainer" containerID="60ce1c55ec6f2cd618e37fdfeb138e0ad43f70aa49603ee8cc8109e67016ddb6" Jan 31 05:11:15 crc kubenswrapper[4832]: I0131 05:11:15.236252 4832 scope.go:117] "RemoveContainer" containerID="090005c53702850a249d5b313f6fe6741e475ae68e85871f129d47bafdd137da" Jan 31 05:11:27 crc kubenswrapper[4832]: I0131 05:11:27.860957 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:11:27 crc kubenswrapper[4832]: E0131 05:11:27.862715 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:11:42 crc kubenswrapper[4832]: I0131 05:11:42.859345 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:11:42 crc kubenswrapper[4832]: E0131 05:11:42.860623 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:11:43 crc kubenswrapper[4832]: I0131 05:11:43.050102 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9jhp9"] Jan 31 05:11:43 crc kubenswrapper[4832]: I0131 05:11:43.063910 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9jhp9"] Jan 31 05:11:43 crc kubenswrapper[4832]: I0131 05:11:43.870645 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c30239-67eb-44a5-83cc-dbec561dade8" path="/var/lib/kubelet/pods/14c30239-67eb-44a5-83cc-dbec561dade8/volumes" Jan 31 05:11:49 crc kubenswrapper[4832]: I0131 05:11:49.048079 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nvbkz"] Jan 31 05:11:49 crc kubenswrapper[4832]: I0131 05:11:49.056305 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nvbkz"] Jan 31 05:11:49 crc kubenswrapper[4832]: I0131 05:11:49.877950 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2a8a734d-c7f4-4fd7-b64f-a053592ee909" path="/var/lib/kubelet/pods/2a8a734d-c7f4-4fd7-b64f-a053592ee909/volumes" Jan 31 05:11:54 crc kubenswrapper[4832]: I0131 05:11:54.860437 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:11:54 crc kubenswrapper[4832]: E0131 05:11:54.861275 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:11:59 crc kubenswrapper[4832]: I0131 05:11:59.043340 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r7sxw"] Jan 31 05:11:59 crc kubenswrapper[4832]: I0131 05:11:59.055249 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r7sxw"] Jan 31 05:11:59 crc kubenswrapper[4832]: I0131 05:11:59.878420 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a775d6b-5610-4b98-a570-8e98c9cadfd2" path="/var/lib/kubelet/pods/4a775d6b-5610-4b98-a570-8e98c9cadfd2/volumes" Jan 31 05:12:04 crc kubenswrapper[4832]: I0131 05:12:04.035173 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-z4rhk"] Jan 31 05:12:04 crc kubenswrapper[4832]: I0131 05:12:04.043267 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-z4rhk"] Jan 31 05:12:05 crc kubenswrapper[4832]: I0131 05:12:05.860026 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:12:05 crc kubenswrapper[4832]: E0131 05:12:05.860538 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:12:05 crc kubenswrapper[4832]: I0131 05:12:05.886267 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1ef32d-8b91-4c1e-b5d1-31a582df6f36" path="/var/lib/kubelet/pods/ba1ef32d-8b91-4c1e-b5d1-31a582df6f36/volumes" Jan 31 05:12:10 crc kubenswrapper[4832]: I0131 05:12:10.080649 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7r8nc"] Jan 31 05:12:10 crc kubenswrapper[4832]: I0131 05:12:10.101085 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7r8nc"] Jan 31 05:12:11 crc kubenswrapper[4832]: I0131 05:12:11.882171 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc76655-d4cd-47c7-be0c-21e52514fe92" path="/var/lib/kubelet/pods/bcc76655-d4cd-47c7-be0c-21e52514fe92/volumes" Jan 31 05:12:15 crc kubenswrapper[4832]: I0131 05:12:15.578919 4832 scope.go:117] "RemoveContainer" containerID="1b40b92839680cdf521e100a2fa3b6eb94638df0fe404a0b4df5a594982ee426" Jan 31 05:12:15 crc kubenswrapper[4832]: I0131 05:12:15.630689 4832 scope.go:117] "RemoveContainer" containerID="00a94f0854a8de1520bd9fd0bc51391272f8127faefbdd237bf99066c50d75f6" Jan 31 05:12:15 crc kubenswrapper[4832]: I0131 05:12:15.682399 4832 scope.go:117] "RemoveContainer" containerID="8f9daae95a73bd91a84eeb49f287a1705462ff7d7d9b0f3cd67b627a09d0fc84" Jan 31 05:12:15 crc kubenswrapper[4832]: I0131 05:12:15.716618 4832 scope.go:117] "RemoveContainer" containerID="771972d1658a47aef4a32953aea27c1123597ac57dbd26b26ec77f854939109f" Jan 31 05:12:15 crc kubenswrapper[4832]: I0131 05:12:15.792756 4832 scope.go:117] "RemoveContainer" 
containerID="7b00b61e86023088c49b00534964dafa1ae8cd8796cdf096fe53588e0f7cdd38" Jan 31 05:12:16 crc kubenswrapper[4832]: I0131 05:12:16.478545 4832 generic.go:334] "Generic (PLEG): container finished" podID="3b016223-cc19-45ea-9ccb-fc81103e1e5f" containerID="aa12d4ed94a9fe3b0f5bfb701d246084146f83f76bfc766520e40435a65d3491" exitCode=0 Jan 31 05:12:16 crc kubenswrapper[4832]: I0131 05:12:16.478613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" event={"ID":"3b016223-cc19-45ea-9ccb-fc81103e1e5f","Type":"ContainerDied","Data":"aa12d4ed94a9fe3b0f5bfb701d246084146f83f76bfc766520e40435a65d3491"} Jan 31 05:12:17 crc kubenswrapper[4832]: I0131 05:12:17.860026 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:12:17 crc kubenswrapper[4832]: E0131 05:12:17.861053 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.023356 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.181070 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam\") pod \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.181121 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc6zf\" (UniqueName: \"kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf\") pod \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.181171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory\") pod \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\" (UID: \"3b016223-cc19-45ea-9ccb-fc81103e1e5f\") " Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.192757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf" (OuterVolumeSpecName: "kube-api-access-dc6zf") pod "3b016223-cc19-45ea-9ccb-fc81103e1e5f" (UID: "3b016223-cc19-45ea-9ccb-fc81103e1e5f"). InnerVolumeSpecName "kube-api-access-dc6zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.209287 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b016223-cc19-45ea-9ccb-fc81103e1e5f" (UID: "3b016223-cc19-45ea-9ccb-fc81103e1e5f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.217324 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory" (OuterVolumeSpecName: "inventory") pod "3b016223-cc19-45ea-9ccb-fc81103e1e5f" (UID: "3b016223-cc19-45ea-9ccb-fc81103e1e5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.283234 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.283273 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc6zf\" (UniqueName: \"kubernetes.io/projected/3b016223-cc19-45ea-9ccb-fc81103e1e5f-kube-api-access-dc6zf\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.283288 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b016223-cc19-45ea-9ccb-fc81103e1e5f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.502626 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.502632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4" event={"ID":"3b016223-cc19-45ea-9ccb-fc81103e1e5f","Type":"ContainerDied","Data":"a880dae3e6c4a030e6e80828bc346a2d84175c82ce1941cef85f3aa50fceced1"} Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.502988 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a880dae3e6c4a030e6e80828bc346a2d84175c82ce1941cef85f3aa50fceced1" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.659347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc"] Jan 31 05:12:18 crc kubenswrapper[4832]: E0131 05:12:18.659774 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b016223-cc19-45ea-9ccb-fc81103e1e5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.659789 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b016223-cc19-45ea-9ccb-fc81103e1e5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.659941 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b016223-cc19-45ea-9ccb-fc81103e1e5f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.660572 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.667458 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.667709 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.667892 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.668259 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.689640 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc"] Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.796245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.796586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsmkw\" (UniqueName: \"kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 
05:12:18.796733 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.898452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.898503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsmkw\" (UniqueName: \"kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.898603 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.902016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.902126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.914652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsmkw\" (UniqueName: \"kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:18 crc kubenswrapper[4832]: I0131 05:12:18.983110 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:19 crc kubenswrapper[4832]: I0131 05:12:19.607997 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc"] Jan 31 05:12:20 crc kubenswrapper[4832]: I0131 05:12:20.527271 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" event={"ID":"945ad601-23f1-4494-a2a8-6bf53b841d2f","Type":"ContainerStarted","Data":"187f34304e072c6798b61436ad560ec1930b5c5b99207dcb98c1bc93139fb215"} Jan 31 05:12:20 crc kubenswrapper[4832]: I0131 05:12:20.527639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" event={"ID":"945ad601-23f1-4494-a2a8-6bf53b841d2f","Type":"ContainerStarted","Data":"ab812085322d9c73b09fd7c5e9ac83a47b6a104e63a115f5c311e983209f6884"} Jan 31 05:12:20 crc kubenswrapper[4832]: I0131 05:12:20.542286 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" podStartSLOduration=2.047492721 podStartE2EDuration="2.542266154s" podCreationTimestamp="2026-01-31 05:12:18 +0000 UTC" firstStartedPulling="2026-01-31 05:12:19.622149277 +0000 UTC m=+1748.570970962" lastFinishedPulling="2026-01-31 05:12:20.11692271 +0000 UTC m=+1749.065744395" observedRunningTime="2026-01-31 05:12:20.540425816 +0000 UTC m=+1749.489247531" watchObservedRunningTime="2026-01-31 05:12:20.542266154 +0000 UTC m=+1749.491087859" Jan 31 05:12:25 crc kubenswrapper[4832]: I0131 05:12:25.594335 4832 generic.go:334] "Generic (PLEG): container finished" podID="945ad601-23f1-4494-a2a8-6bf53b841d2f" containerID="187f34304e072c6798b61436ad560ec1930b5c5b99207dcb98c1bc93139fb215" exitCode=0 Jan 31 05:12:25 crc kubenswrapper[4832]: I0131 05:12:25.594447 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" event={"ID":"945ad601-23f1-4494-a2a8-6bf53b841d2f","Type":"ContainerDied","Data":"187f34304e072c6798b61436ad560ec1930b5c5b99207dcb98c1bc93139fb215"} Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.048084 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.169270 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory\") pod \"945ad601-23f1-4494-a2a8-6bf53b841d2f\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.169674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsmkw\" (UniqueName: \"kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw\") pod \"945ad601-23f1-4494-a2a8-6bf53b841d2f\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.169887 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam\") pod \"945ad601-23f1-4494-a2a8-6bf53b841d2f\" (UID: \"945ad601-23f1-4494-a2a8-6bf53b841d2f\") " Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.175773 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw" (OuterVolumeSpecName: "kube-api-access-nsmkw") pod "945ad601-23f1-4494-a2a8-6bf53b841d2f" (UID: "945ad601-23f1-4494-a2a8-6bf53b841d2f"). InnerVolumeSpecName "kube-api-access-nsmkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.204482 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "945ad601-23f1-4494-a2a8-6bf53b841d2f" (UID: "945ad601-23f1-4494-a2a8-6bf53b841d2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.208300 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory" (OuterVolumeSpecName: "inventory") pod "945ad601-23f1-4494-a2a8-6bf53b841d2f" (UID: "945ad601-23f1-4494-a2a8-6bf53b841d2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.272440 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.272477 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945ad601-23f1-4494-a2a8-6bf53b841d2f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.272491 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsmkw\" (UniqueName: \"kubernetes.io/projected/945ad601-23f1-4494-a2a8-6bf53b841d2f-kube-api-access-nsmkw\") on node \"crc\" DevicePath \"\"" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.616299 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" 
event={"ID":"945ad601-23f1-4494-a2a8-6bf53b841d2f","Type":"ContainerDied","Data":"ab812085322d9c73b09fd7c5e9ac83a47b6a104e63a115f5c311e983209f6884"} Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.616344 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab812085322d9c73b09fd7c5e9ac83a47b6a104e63a115f5c311e983209f6884" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.616373 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.700111 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66"] Jan 31 05:12:27 crc kubenswrapper[4832]: E0131 05:12:27.700662 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945ad601-23f1-4494-a2a8-6bf53b841d2f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.700687 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="945ad601-23f1-4494-a2a8-6bf53b841d2f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.700909 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="945ad601-23f1-4494-a2a8-6bf53b841d2f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.701990 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.704259 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.704831 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.705180 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.708781 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.710468 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66"] Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.883234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcfj\" (UniqueName: \"kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.883625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 
05:12:27.883721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.985409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcfj\" (UniqueName: \"kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.985535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.985600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.992046 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:27 crc kubenswrapper[4832]: I0131 05:12:27.992174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:28 crc kubenswrapper[4832]: I0131 05:12:28.020324 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcfj\" (UniqueName: \"kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dgr66\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:28 crc kubenswrapper[4832]: I0131 05:12:28.072717 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:12:28 crc kubenswrapper[4832]: W0131 05:12:28.742794 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23b0f31e_31d6_4f12_91d5_fe078d89dfb7.slice/crio-dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01 WatchSource:0}: Error finding container dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01: Status 404 returned error can't find the container with id dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01 Jan 31 05:12:28 crc kubenswrapper[4832]: I0131 05:12:28.746069 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66"] Jan 31 05:12:29 crc kubenswrapper[4832]: I0131 05:12:29.645621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" event={"ID":"23b0f31e-31d6-4f12-91d5-fe078d89dfb7","Type":"ContainerStarted","Data":"74b9c710fa693e73ecac9b36d8c505e6ee1d726efda7497dbf65f3d4ebb9ad73"} Jan 31 05:12:29 crc kubenswrapper[4832]: I0131 05:12:29.647176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" event={"ID":"23b0f31e-31d6-4f12-91d5-fe078d89dfb7","Type":"ContainerStarted","Data":"dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01"} Jan 31 05:12:29 crc kubenswrapper[4832]: I0131 05:12:29.672737 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" podStartSLOduration=2.129684459 podStartE2EDuration="2.672709205s" podCreationTimestamp="2026-01-31 05:12:27 +0000 UTC" firstStartedPulling="2026-01-31 05:12:28.745054262 +0000 UTC m=+1757.693875947" lastFinishedPulling="2026-01-31 05:12:29.288079008 +0000 UTC m=+1758.236900693" 
observedRunningTime="2026-01-31 05:12:29.669266468 +0000 UTC m=+1758.618088173" watchObservedRunningTime="2026-01-31 05:12:29.672709205 +0000 UTC m=+1758.621530930" Jan 31 05:12:30 crc kubenswrapper[4832]: I0131 05:12:30.859627 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:12:30 crc kubenswrapper[4832]: E0131 05:12:30.860790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:12:42 crc kubenswrapper[4832]: I0131 05:12:42.859639 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:12:42 crc kubenswrapper[4832]: E0131 05:12:42.860706 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.054284 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5x7l7"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.077341 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-k4pzp"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.089593 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jwzlw"] Jan 31 
05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.101604 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7db2-account-create-update-gjdrd"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.112026 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cce3-account-create-update-5j49p"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.121330 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5x7l7"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.132366 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7db2-account-create-update-gjdrd"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.139796 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-k4pzp"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.146813 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cce3-account-create-update-5j49p"] Jan 31 05:12:50 crc kubenswrapper[4832]: I0131 05:12:50.153786 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jwzlw"] Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.054438 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3153-account-create-update-xpnkf"] Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.063977 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3153-account-create-update-xpnkf"] Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.878260 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4901d831-7d96-44f6-afde-ab64894754b5" path="/var/lib/kubelet/pods/4901d831-7d96-44f6-afde-ab64894754b5/volumes" Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.878904 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a7b311-e7f0-4471-937c-bc6330f4c1c5" 
path="/var/lib/kubelet/pods/96a7b311-e7f0-4471-937c-bc6330f4c1c5/volumes" Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.879482 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e367b5-55d2-4e64-9fda-e9b79fe2684f" path="/var/lib/kubelet/pods/a4e367b5-55d2-4e64-9fda-e9b79fe2684f/volumes" Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.880028 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6913a01-865b-47e6-bf86-fe0ecfc7ea42" path="/var/lib/kubelet/pods/a6913a01-865b-47e6-bf86-fe0ecfc7ea42/volumes" Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.881052 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be592430-58f0-43c6-b740-fd413081c5c4" path="/var/lib/kubelet/pods/be592430-58f0-43c6-b740-fd413081c5c4/volumes" Jan 31 05:12:51 crc kubenswrapper[4832]: I0131 05:12:51.881639 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15a8ae3-e07a-4f6d-9364-a847ec46f620" path="/var/lib/kubelet/pods/d15a8ae3-e07a-4f6d-9364-a847ec46f620/volumes" Jan 31 05:12:53 crc kubenswrapper[4832]: I0131 05:12:53.860774 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:12:53 crc kubenswrapper[4832]: E0131 05:12:53.861539 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:13:04 crc kubenswrapper[4832]: I0131 05:13:04.861682 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:13:04 crc kubenswrapper[4832]: E0131 05:13:04.863466 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:13:08 crc kubenswrapper[4832]: I0131 05:13:08.077466 4832 generic.go:334] "Generic (PLEG): container finished" podID="23b0f31e-31d6-4f12-91d5-fe078d89dfb7" containerID="74b9c710fa693e73ecac9b36d8c505e6ee1d726efda7497dbf65f3d4ebb9ad73" exitCode=0 Jan 31 05:13:08 crc kubenswrapper[4832]: I0131 05:13:08.077517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" event={"ID":"23b0f31e-31d6-4f12-91d5-fe078d89dfb7","Type":"ContainerDied","Data":"74b9c710fa693e73ecac9b36d8c505e6ee1d726efda7497dbf65f3d4ebb9ad73"} Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.607217 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.708264 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam\") pod \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.708434 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory\") pod \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.708550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvcfj\" (UniqueName: \"kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj\") pod \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\" (UID: \"23b0f31e-31d6-4f12-91d5-fe078d89dfb7\") " Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.714315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj" (OuterVolumeSpecName: "kube-api-access-rvcfj") pod "23b0f31e-31d6-4f12-91d5-fe078d89dfb7" (UID: "23b0f31e-31d6-4f12-91d5-fe078d89dfb7"). InnerVolumeSpecName "kube-api-access-rvcfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.737594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory" (OuterVolumeSpecName: "inventory") pod "23b0f31e-31d6-4f12-91d5-fe078d89dfb7" (UID: "23b0f31e-31d6-4f12-91d5-fe078d89dfb7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.755270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23b0f31e-31d6-4f12-91d5-fe078d89dfb7" (UID: "23b0f31e-31d6-4f12-91d5-fe078d89dfb7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.810819 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvcfj\" (UniqueName: \"kubernetes.io/projected/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-kube-api-access-rvcfj\") on node \"crc\" DevicePath \"\"" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.810851 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:13:09 crc kubenswrapper[4832]: I0131 05:13:09.810861 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23b0f31e-31d6-4f12-91d5-fe078d89dfb7-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.105707 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" event={"ID":"23b0f31e-31d6-4f12-91d5-fe078d89dfb7","Type":"ContainerDied","Data":"dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01"} Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.105748 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff142058da8e6828b454e3e303b5d8aa6221d670d15f0004f006746acfd8c01" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 
05:13:10.105788 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dgr66" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.211008 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx"] Jan 31 05:13:10 crc kubenswrapper[4832]: E0131 05:13:10.211654 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0f31e-31d6-4f12-91d5-fe078d89dfb7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.211685 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0f31e-31d6-4f12-91d5-fe078d89dfb7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.211958 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b0f31e-31d6-4f12-91d5-fe078d89dfb7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.212852 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.220976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.221258 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.221448 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.221644 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.223645 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx"] Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.322674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.322732 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.323072 
4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjdg\" (UniqueName: \"kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.424844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjdg\" (UniqueName: \"kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.425269 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.425425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.439437 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.439531 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.455711 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjdg\" (UniqueName: \"kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:10 crc kubenswrapper[4832]: I0131 05:13:10.537488 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:13:11 crc kubenswrapper[4832]: I0131 05:13:11.134401 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx"] Jan 31 05:13:11 crc kubenswrapper[4832]: W0131 05:13:11.140402 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915d4541_b4f7_4a50_ba36_3ed09a631c87.slice/crio-3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01 WatchSource:0}: Error finding container 3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01: Status 404 returned error can't find the container with id 3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01 Jan 31 05:13:12 crc kubenswrapper[4832]: I0131 05:13:12.137290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" event={"ID":"915d4541-b4f7-4a50-ba36-3ed09a631c87","Type":"ContainerStarted","Data":"be4821fecd76d587ec4e0774c18e03e19ec9e58354661b313c2f6de841a9a269"} Jan 31 05:13:12 crc kubenswrapper[4832]: I0131 05:13:12.138053 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" event={"ID":"915d4541-b4f7-4a50-ba36-3ed09a631c87","Type":"ContainerStarted","Data":"3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01"} Jan 31 05:13:12 crc kubenswrapper[4832]: I0131 05:13:12.162539 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" podStartSLOduration=1.678697289 podStartE2EDuration="2.162512103s" podCreationTimestamp="2026-01-31 05:13:10 +0000 UTC" firstStartedPulling="2026-01-31 05:13:11.145519709 +0000 UTC m=+1800.094341404" lastFinishedPulling="2026-01-31 05:13:11.629334523 +0000 UTC m=+1800.578156218" 
observedRunningTime="2026-01-31 05:13:12.158912171 +0000 UTC m=+1801.107733896" watchObservedRunningTime="2026-01-31 05:13:12.162512103 +0000 UTC m=+1801.111333818" Jan 31 05:13:15 crc kubenswrapper[4832]: I0131 05:13:15.932638 4832 scope.go:117] "RemoveContainer" containerID="26be16a39c82094bbe783257d7baafd4e46aa9ad9f65ecebdc4baa0a5a19f82e" Jan 31 05:13:15 crc kubenswrapper[4832]: I0131 05:13:15.970757 4832 scope.go:117] "RemoveContainer" containerID="f9195aa1df7c4c3882ff11144129c01f69aa187ef17accbb673d88d0d273f32e" Jan 31 05:13:16 crc kubenswrapper[4832]: I0131 05:13:16.007822 4832 scope.go:117] "RemoveContainer" containerID="622d0e18b365feccb5aff103f23927913295c2b63e39705302e5b6e3f4211add" Jan 31 05:13:16 crc kubenswrapper[4832]: I0131 05:13:16.056867 4832 scope.go:117] "RemoveContainer" containerID="969c559518333a73a6204d199e23eef77f959f6dcf5a46813c2fcc18be6da989" Jan 31 05:13:16 crc kubenswrapper[4832]: I0131 05:13:16.093666 4832 scope.go:117] "RemoveContainer" containerID="f54b20bf046478851aeff7b9746d68e8c7ffd729b6996c9ce97218e17c4ae5b6" Jan 31 05:13:16 crc kubenswrapper[4832]: I0131 05:13:16.151669 4832 scope.go:117] "RemoveContainer" containerID="85b1c8e42122f92e2a6350fb3b7a92d54d2d2edf8463eca5c27de383eb0113a8" Jan 31 05:13:18 crc kubenswrapper[4832]: I0131 05:13:18.859754 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:13:18 crc kubenswrapper[4832]: E0131 05:13:18.860707 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:13:19 crc kubenswrapper[4832]: I0131 05:13:19.048142 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4jdb8"] Jan 31 05:13:19 crc kubenswrapper[4832]: I0131 05:13:19.056699 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4jdb8"] Jan 31 05:13:19 crc kubenswrapper[4832]: I0131 05:13:19.878867 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ec4022-010e-4c03-8e2c-622261e37510" path="/var/lib/kubelet/pods/67ec4022-010e-4c03-8e2c-622261e37510/volumes" Jan 31 05:13:33 crc kubenswrapper[4832]: I0131 05:13:33.859423 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:13:33 crc kubenswrapper[4832]: E0131 05:13:33.860755 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:13:42 crc kubenswrapper[4832]: I0131 05:13:42.055108 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgfgj"] Jan 31 05:13:42 crc kubenswrapper[4832]: I0131 05:13:42.064610 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pgfgj"] Jan 31 05:13:43 crc kubenswrapper[4832]: I0131 05:13:43.035984 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bx446"] Jan 31 05:13:43 crc kubenswrapper[4832]: I0131 05:13:43.047140 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bx446"] Jan 31 05:13:43 crc kubenswrapper[4832]: I0131 05:13:43.870663 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1348b78a-eddf-4b18-b3ab-3aa70968678f" path="/var/lib/kubelet/pods/1348b78a-eddf-4b18-b3ab-3aa70968678f/volumes" Jan 31 05:13:43 crc kubenswrapper[4832]: I0131 05:13:43.871355 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9897d32-fbf9-4266-ae7e-2c9ec76b65c5" path="/var/lib/kubelet/pods/d9897d32-fbf9-4266-ae7e-2c9ec76b65c5/volumes" Jan 31 05:13:45 crc kubenswrapper[4832]: I0131 05:13:45.860533 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:13:45 crc kubenswrapper[4832]: E0131 05:13:45.862493 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:13:59 crc kubenswrapper[4832]: I0131 05:13:59.859952 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:14:00 crc kubenswrapper[4832]: I0131 05:14:00.672246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53"} Jan 31 05:14:05 crc kubenswrapper[4832]: I0131 05:14:05.726871 4832 generic.go:334] "Generic (PLEG): container finished" podID="915d4541-b4f7-4a50-ba36-3ed09a631c87" containerID="be4821fecd76d587ec4e0774c18e03e19ec9e58354661b313c2f6de841a9a269" exitCode=0 Jan 31 05:14:05 crc kubenswrapper[4832]: I0131 05:14:05.726978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" event={"ID":"915d4541-b4f7-4a50-ba36-3ed09a631c87","Type":"ContainerDied","Data":"be4821fecd76d587ec4e0774c18e03e19ec9e58354661b313c2f6de841a9a269"} Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.230867 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.287117 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:07 crc kubenswrapper[4832]: E0131 05:14:07.287726 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915d4541-b4f7-4a50-ba36-3ed09a631c87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.287748 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="915d4541-b4f7-4a50-ba36-3ed09a631c87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.287954 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="915d4541-b4f7-4a50-ba36-3ed09a631c87" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.291783 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.307054 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.307462 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txjdg\" (UniqueName: \"kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg\") pod \"915d4541-b4f7-4a50-ba36-3ed09a631c87\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.307679 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam\") pod \"915d4541-b4f7-4a50-ba36-3ed09a631c87\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.307726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory\") pod \"915d4541-b4f7-4a50-ba36-3ed09a631c87\" (UID: \"915d4541-b4f7-4a50-ba36-3ed09a631c87\") " Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.315221 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg" (OuterVolumeSpecName: "kube-api-access-txjdg") pod "915d4541-b4f7-4a50-ba36-3ed09a631c87" (UID: "915d4541-b4f7-4a50-ba36-3ed09a631c87"). InnerVolumeSpecName "kube-api-access-txjdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.348524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory" (OuterVolumeSpecName: "inventory") pod "915d4541-b4f7-4a50-ba36-3ed09a631c87" (UID: "915d4541-b4f7-4a50-ba36-3ed09a631c87"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.377951 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "915d4541-b4f7-4a50-ba36-3ed09a631c87" (UID: "915d4541-b4f7-4a50-ba36-3ed09a631c87"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvrhb\" (UniqueName: \"kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410236 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410453 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410485 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915d4541-b4f7-4a50-ba36-3ed09a631c87-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.410494 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txjdg\" (UniqueName: \"kubernetes.io/projected/915d4541-b4f7-4a50-ba36-3ed09a631c87-kube-api-access-txjdg\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.512753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.512898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvrhb\" (UniqueName: \"kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.512933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.513424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.513430 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.533699 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvrhb\" (UniqueName: \"kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb\") pod \"community-operators-gvv9f\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.730142 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.747621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" event={"ID":"915d4541-b4f7-4a50-ba36-3ed09a631c87","Type":"ContainerDied","Data":"3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01"} Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.747662 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb4d6f160453fc089f978b5ab144734b0a620b6be88795fb076e5018c29fc01" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.747719 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.887397 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gmglt"] Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.888997 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.897888 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.898174 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.898521 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.898733 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.899859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gmglt"] Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.922773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.922885 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzc6\" (UniqueName: \"kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:07 crc kubenswrapper[4832]: I0131 05:14:07.922931 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.024686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.024874 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.024928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzc6\" (UniqueName: \"kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.032881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: 
I0131 05:14:08.033166 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.043259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzc6\" (UniqueName: \"kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6\") pod \"ssh-known-hosts-edpm-deployment-gmglt\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.230893 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.257771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.767481 4832 generic.go:334] "Generic (PLEG): container finished" podID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerID="83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb" exitCode=0 Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.767528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerDied","Data":"83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb"} Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.767810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" 
event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerStarted","Data":"1550b09b39deed8c63d940162e9dd88272d1de84fab8817cb7f55fddae3e353e"} Jan 31 05:14:08 crc kubenswrapper[4832]: I0131 05:14:08.792849 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gmglt"] Jan 31 05:14:08 crc kubenswrapper[4832]: W0131 05:14:08.798739 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26297c57_667f_414b_912c_2bfa05b73299.slice/crio-d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f WatchSource:0}: Error finding container d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f: Status 404 returned error can't find the container with id d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f Jan 31 05:14:09 crc kubenswrapper[4832]: I0131 05:14:09.779724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerStarted","Data":"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769"} Jan 31 05:14:09 crc kubenswrapper[4832]: I0131 05:14:09.783081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" event={"ID":"26297c57-667f-414b-912c-2bfa05b73299","Type":"ContainerStarted","Data":"41a7bc2828885aa3fb156d6ad3798239bda7eea45c004f3c9501b1d9c5ef33e1"} Jan 31 05:14:09 crc kubenswrapper[4832]: I0131 05:14:09.783130 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" event={"ID":"26297c57-667f-414b-912c-2bfa05b73299","Type":"ContainerStarted","Data":"d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f"} Jan 31 05:14:09 crc kubenswrapper[4832]: I0131 05:14:09.832139 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" podStartSLOduration=2.357289942 podStartE2EDuration="2.832118867s" podCreationTimestamp="2026-01-31 05:14:07 +0000 UTC" firstStartedPulling="2026-01-31 05:14:08.802456058 +0000 UTC m=+1857.751277743" lastFinishedPulling="2026-01-31 05:14:09.277284943 +0000 UTC m=+1858.226106668" observedRunningTime="2026-01-31 05:14:09.824936403 +0000 UTC m=+1858.773758108" watchObservedRunningTime="2026-01-31 05:14:09.832118867 +0000 UTC m=+1858.780940552" Jan 31 05:14:10 crc kubenswrapper[4832]: I0131 05:14:10.795544 4832 generic.go:334] "Generic (PLEG): container finished" podID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerID="876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769" exitCode=0 Jan 31 05:14:10 crc kubenswrapper[4832]: I0131 05:14:10.795645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerDied","Data":"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769"} Jan 31 05:14:11 crc kubenswrapper[4832]: I0131 05:14:11.809128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerStarted","Data":"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9"} Jan 31 05:14:11 crc kubenswrapper[4832]: I0131 05:14:11.831425 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gvv9f" podStartSLOduration=2.414320416 podStartE2EDuration="4.831401263s" podCreationTimestamp="2026-01-31 05:14:07 +0000 UTC" firstStartedPulling="2026-01-31 05:14:08.769589196 +0000 UTC m=+1857.718410881" lastFinishedPulling="2026-01-31 05:14:11.186670033 +0000 UTC m=+1860.135491728" observedRunningTime="2026-01-31 05:14:11.82968159 +0000 UTC m=+1860.778503295" watchObservedRunningTime="2026-01-31 
05:14:11.831401263 +0000 UTC m=+1860.780222958" Jan 31 05:14:16 crc kubenswrapper[4832]: I0131 05:14:16.313896 4832 scope.go:117] "RemoveContainer" containerID="179615d10f1bc54b71d799e1536b72a837c5b802a71ab86f3c9bf1a071c9ecb2" Jan 31 05:14:16 crc kubenswrapper[4832]: I0131 05:14:16.377969 4832 scope.go:117] "RemoveContainer" containerID="a3dcc126a8db5958bac709afb63843f8f51f549b070356980519366b39791051" Jan 31 05:14:16 crc kubenswrapper[4832]: I0131 05:14:16.453148 4832 scope.go:117] "RemoveContainer" containerID="921950b98c373d71bed145fc150ae8ae81dd172c85b0b97c772f65f2f9659991" Jan 31 05:14:16 crc kubenswrapper[4832]: I0131 05:14:16.865643 4832 generic.go:334] "Generic (PLEG): container finished" podID="26297c57-667f-414b-912c-2bfa05b73299" containerID="41a7bc2828885aa3fb156d6ad3798239bda7eea45c004f3c9501b1d9c5ef33e1" exitCode=0 Jan 31 05:14:16 crc kubenswrapper[4832]: I0131 05:14:16.866370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" event={"ID":"26297c57-667f-414b-912c-2bfa05b73299","Type":"ContainerDied","Data":"41a7bc2828885aa3fb156d6ad3798239bda7eea45c004f3c9501b1d9c5ef33e1"} Jan 31 05:14:17 crc kubenswrapper[4832]: I0131 05:14:17.730540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:17 crc kubenswrapper[4832]: I0131 05:14:17.731510 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:17 crc kubenswrapper[4832]: I0131 05:14:17.819528 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:17 crc kubenswrapper[4832]: I0131 05:14:17.968315 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.072785 4832 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.430238 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.465307 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzc6\" (UniqueName: \"kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6\") pod \"26297c57-667f-414b-912c-2bfa05b73299\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.465474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0\") pod \"26297c57-667f-414b-912c-2bfa05b73299\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.465674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam\") pod \"26297c57-667f-414b-912c-2bfa05b73299\" (UID: \"26297c57-667f-414b-912c-2bfa05b73299\") " Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.472805 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6" (OuterVolumeSpecName: "kube-api-access-2jzc6") pod "26297c57-667f-414b-912c-2bfa05b73299" (UID: "26297c57-667f-414b-912c-2bfa05b73299"). InnerVolumeSpecName "kube-api-access-2jzc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.492079 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "26297c57-667f-414b-912c-2bfa05b73299" (UID: "26297c57-667f-414b-912c-2bfa05b73299"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.514625 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26297c57-667f-414b-912c-2bfa05b73299" (UID: "26297c57-667f-414b-912c-2bfa05b73299"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.567880 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzc6\" (UniqueName: \"kubernetes.io/projected/26297c57-667f-414b-912c-2bfa05b73299-kube-api-access-2jzc6\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.567914 4832 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.567929 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26297c57-667f-414b-912c-2bfa05b73299-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.894633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" 
event={"ID":"26297c57-667f-414b-912c-2bfa05b73299","Type":"ContainerDied","Data":"d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f"} Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.894721 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69a7835eca2cad9616ffa02124085ea0f8cf66b094cb42c2738f9487bed074f" Jan 31 05:14:18 crc kubenswrapper[4832]: I0131 05:14:18.894662 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gmglt" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.007944 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g"] Jan 31 05:14:19 crc kubenswrapper[4832]: E0131 05:14:19.008517 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26297c57-667f-414b-912c-2bfa05b73299" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.008542 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="26297c57-667f-414b-912c-2bfa05b73299" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.008805 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="26297c57-667f-414b-912c-2bfa05b73299" containerName="ssh-known-hosts-edpm-deployment" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.009824 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.014046 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.014344 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.014691 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.015123 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.019785 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g"] Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.079506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.079808 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.079947 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rbk\" (UniqueName: \"kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.183107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.183693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rbk\" (UniqueName: \"kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.184063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.189178 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.189597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.208288 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rbk\" (UniqueName: \"kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zzp8g\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.374033 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.902638 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gvv9f" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="registry-server" containerID="cri-o://1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9" gracePeriod=2 Jan 31 05:14:19 crc kubenswrapper[4832]: I0131 05:14:19.905022 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g"] Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.502776 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.612918 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities\") pod \"5671a9de-8bda-4e92-8ca0-c562cd521aef\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.613257 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvrhb\" (UniqueName: \"kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb\") pod \"5671a9de-8bda-4e92-8ca0-c562cd521aef\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.613387 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") pod \"5671a9de-8bda-4e92-8ca0-c562cd521aef\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.614242 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities" (OuterVolumeSpecName: "utilities") pod "5671a9de-8bda-4e92-8ca0-c562cd521aef" (UID: "5671a9de-8bda-4e92-8ca0-c562cd521aef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.618184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb" (OuterVolumeSpecName: "kube-api-access-mvrhb") pod "5671a9de-8bda-4e92-8ca0-c562cd521aef" (UID: "5671a9de-8bda-4e92-8ca0-c562cd521aef"). InnerVolumeSpecName "kube-api-access-mvrhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.716668 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.716713 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvrhb\" (UniqueName: \"kubernetes.io/projected/5671a9de-8bda-4e92-8ca0-c562cd521aef-kube-api-access-mvrhb\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.915546 4832 generic.go:334] "Generic (PLEG): container finished" podID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerID="1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9" exitCode=0 Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.915612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerDied","Data":"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9"} Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.915702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvv9f" event={"ID":"5671a9de-8bda-4e92-8ca0-c562cd521aef","Type":"ContainerDied","Data":"1550b09b39deed8c63d940162e9dd88272d1de84fab8817cb7f55fddae3e353e"} Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.915729 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvv9f" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.915739 4832 scope.go:117] "RemoveContainer" containerID="1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.918028 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" event={"ID":"eac023bd-8a06-4be3-9c44-a29c87e4c44c","Type":"ContainerStarted","Data":"fd329fac5dd45294e7fd1c79dc530821867aa38be84844ed26a1ac5e36f6dcdb"} Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.953659 4832 scope.go:117] "RemoveContainer" containerID="876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769" Jan 31 05:14:20 crc kubenswrapper[4832]: I0131 05:14:20.984015 4832 scope.go:117] "RemoveContainer" containerID="83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.013717 4832 scope.go:117] "RemoveContainer" containerID="1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9" Jan 31 05:14:21 crc kubenswrapper[4832]: E0131 05:14:21.014283 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9\": container with ID starting with 1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9 not found: ID does not exist" containerID="1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.014360 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9"} err="failed to get container status \"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9\": rpc error: code = NotFound desc = could not find container 
\"1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9\": container with ID starting with 1d6f6a31e49e535d36991bf0df4fa9e8d179c6d98fe4fe2fa8420f69763f80a9 not found: ID does not exist" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.014416 4832 scope.go:117] "RemoveContainer" containerID="876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769" Jan 31 05:14:21 crc kubenswrapper[4832]: E0131 05:14:21.015008 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769\": container with ID starting with 876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769 not found: ID does not exist" containerID="876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.015087 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769"} err="failed to get container status \"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769\": rpc error: code = NotFound desc = could not find container \"876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769\": container with ID starting with 876e49ecc38a4a7d58d336ec549c683dd057f554a66806145de48a557b47e769 not found: ID does not exist" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.015153 4832 scope.go:117] "RemoveContainer" containerID="83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb" Jan 31 05:14:21 crc kubenswrapper[4832]: E0131 05:14:21.015651 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb\": container with ID starting with 83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb not found: ID does not exist" 
containerID="83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.015706 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb"} err="failed to get container status \"83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb\": rpc error: code = NotFound desc = could not find container \"83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb\": container with ID starting with 83be56bc7af9567953333b675dae26980ad0c9339d8b945be234f545bd1d35fb not found: ID does not exist" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.231512 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5671a9de-8bda-4e92-8ca0-c562cd521aef" (UID: "5671a9de-8bda-4e92-8ca0-c562cd521aef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.232169 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") pod \"5671a9de-8bda-4e92-8ca0-c562cd521aef\" (UID: \"5671a9de-8bda-4e92-8ca0-c562cd521aef\") " Jan 31 05:14:21 crc kubenswrapper[4832]: W0131 05:14:21.232386 4832 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5671a9de-8bda-4e92-8ca0-c562cd521aef/volumes/kubernetes.io~empty-dir/catalog-content Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.232421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5671a9de-8bda-4e92-8ca0-c562cd521aef" (UID: "5671a9de-8bda-4e92-8ca0-c562cd521aef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.233382 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5671a9de-8bda-4e92-8ca0-c562cd521aef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.567854 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.581003 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gvv9f"] Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.905100 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" path="/var/lib/kubelet/pods/5671a9de-8bda-4e92-8ca0-c562cd521aef/volumes" Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.939502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" event={"ID":"eac023bd-8a06-4be3-9c44-a29c87e4c44c","Type":"ContainerStarted","Data":"74586b0f88492d3b75c34c656cd206f67f69784a08d62725cfc6c3e36cc01b42"} Jan 31 05:14:21 crc kubenswrapper[4832]: I0131 05:14:21.960322 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" podStartSLOduration=3.311058271 podStartE2EDuration="3.960293931s" podCreationTimestamp="2026-01-31 05:14:18 +0000 UTC" firstStartedPulling="2026-01-31 05:14:19.911601047 +0000 UTC m=+1868.860422732" lastFinishedPulling="2026-01-31 05:14:20.560836717 +0000 UTC m=+1869.509658392" observedRunningTime="2026-01-31 05:14:21.953526721 +0000 UTC m=+1870.902348446" watchObservedRunningTime="2026-01-31 05:14:21.960293931 +0000 UTC m=+1870.909115636" Jan 31 05:14:27 crc kubenswrapper[4832]: I0131 05:14:27.055809 4832 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-698zs"] Jan 31 05:14:27 crc kubenswrapper[4832]: I0131 05:14:27.065116 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-698zs"] Jan 31 05:14:27 crc kubenswrapper[4832]: I0131 05:14:27.878231 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1" path="/var/lib/kubelet/pods/c0fe01e3-b6e8-4c88-99fa-9bf2b647c4c1/volumes" Jan 31 05:14:30 crc kubenswrapper[4832]: I0131 05:14:30.015251 4832 generic.go:334] "Generic (PLEG): container finished" podID="eac023bd-8a06-4be3-9c44-a29c87e4c44c" containerID="74586b0f88492d3b75c34c656cd206f67f69784a08d62725cfc6c3e36cc01b42" exitCode=0 Jan 31 05:14:30 crc kubenswrapper[4832]: I0131 05:14:30.015392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" event={"ID":"eac023bd-8a06-4be3-9c44-a29c87e4c44c","Type":"ContainerDied","Data":"74586b0f88492d3b75c34c656cd206f67f69784a08d62725cfc6c3e36cc01b42"} Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.531846 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.684836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory\") pod \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.684896 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam\") pod \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.685036 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rbk\" (UniqueName: \"kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk\") pod \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\" (UID: \"eac023bd-8a06-4be3-9c44-a29c87e4c44c\") " Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.690682 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk" (OuterVolumeSpecName: "kube-api-access-85rbk") pod "eac023bd-8a06-4be3-9c44-a29c87e4c44c" (UID: "eac023bd-8a06-4be3-9c44-a29c87e4c44c"). InnerVolumeSpecName "kube-api-access-85rbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.718741 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory" (OuterVolumeSpecName: "inventory") pod "eac023bd-8a06-4be3-9c44-a29c87e4c44c" (UID: "eac023bd-8a06-4be3-9c44-a29c87e4c44c"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.723959 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eac023bd-8a06-4be3-9c44-a29c87e4c44c" (UID: "eac023bd-8a06-4be3-9c44-a29c87e4c44c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.788258 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85rbk\" (UniqueName: \"kubernetes.io/projected/eac023bd-8a06-4be3-9c44-a29c87e4c44c-kube-api-access-85rbk\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.788315 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:31 crc kubenswrapper[4832]: I0131 05:14:31.788335 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eac023bd-8a06-4be3-9c44-a29c87e4c44c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.049539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" event={"ID":"eac023bd-8a06-4be3-9c44-a29c87e4c44c","Type":"ContainerDied","Data":"fd329fac5dd45294e7fd1c79dc530821867aa38be84844ed26a1ac5e36f6dcdb"} Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.049625 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd329fac5dd45294e7fd1c79dc530821867aa38be84844ed26a1ac5e36f6dcdb" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 
05:14:32.049750 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zzp8g" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.146332 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq"] Jan 31 05:14:32 crc kubenswrapper[4832]: E0131 05:14:32.147055 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac023bd-8a06-4be3-9c44-a29c87e4c44c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147082 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac023bd-8a06-4be3-9c44-a29c87e4c44c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:32 crc kubenswrapper[4832]: E0131 05:14:32.147140 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="registry-server" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147153 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="registry-server" Jan 31 05:14:32 crc kubenswrapper[4832]: E0131 05:14:32.147171 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="extract-content" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147182 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="extract-content" Jan 31 05:14:32 crc kubenswrapper[4832]: E0131 05:14:32.147262 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="extract-utilities" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147271 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="extract-utilities" Jan 31 
05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147519 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5671a9de-8bda-4e92-8ca0-c562cd521aef" containerName="registry-server" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.147573 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac023bd-8a06-4be3-9c44-a29c87e4c44c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.148790 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.154399 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.154670 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.154720 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.154917 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.162835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq"] Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.199882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.199934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2wx\" (UniqueName: \"kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.200047 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.302761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.303226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.303251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xb2wx\" (UniqueName: \"kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.309156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.313036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.322067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2wx\" (UniqueName: \"kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:32 crc kubenswrapper[4832]: I0131 05:14:32.475395 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:33 crc kubenswrapper[4832]: I0131 05:14:33.173283 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq"] Jan 31 05:14:34 crc kubenswrapper[4832]: I0131 05:14:34.077944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" event={"ID":"cafe239e-692a-4f8c-baf3-94b454ed706d","Type":"ContainerStarted","Data":"31624c73eb254513a794502024af60f77fa881a072072e0bcdfb98f7eab25ef2"} Jan 31 05:14:34 crc kubenswrapper[4832]: I0131 05:14:34.078675 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" event={"ID":"cafe239e-692a-4f8c-baf3-94b454ed706d","Type":"ContainerStarted","Data":"574671c39b0a0cf2e950a84cdd6a933cf49f2d07165d7b288479d59ce6e8299a"} Jan 31 05:14:34 crc kubenswrapper[4832]: I0131 05:14:34.116530 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" podStartSLOduration=1.691203728 podStartE2EDuration="2.116502592s" podCreationTimestamp="2026-01-31 05:14:32 +0000 UTC" firstStartedPulling="2026-01-31 05:14:33.181458246 +0000 UTC m=+1882.130279931" lastFinishedPulling="2026-01-31 05:14:33.60675707 +0000 UTC m=+1882.555578795" observedRunningTime="2026-01-31 05:14:34.109105582 +0000 UTC m=+1883.057927297" watchObservedRunningTime="2026-01-31 05:14:34.116502592 +0000 UTC m=+1883.065324317" Jan 31 05:14:44 crc kubenswrapper[4832]: I0131 05:14:44.177787 4832 generic.go:334] "Generic (PLEG): container finished" podID="cafe239e-692a-4f8c-baf3-94b454ed706d" containerID="31624c73eb254513a794502024af60f77fa881a072072e0bcdfb98f7eab25ef2" exitCode=0 Jan 31 05:14:44 crc kubenswrapper[4832]: I0131 05:14:44.178007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" event={"ID":"cafe239e-692a-4f8c-baf3-94b454ed706d","Type":"ContainerDied","Data":"31624c73eb254513a794502024af60f77fa881a072072e0bcdfb98f7eab25ef2"} Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.685712 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.717267 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2wx\" (UniqueName: \"kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx\") pod \"cafe239e-692a-4f8c-baf3-94b454ed706d\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.717432 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory\") pod \"cafe239e-692a-4f8c-baf3-94b454ed706d\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.717501 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam\") pod \"cafe239e-692a-4f8c-baf3-94b454ed706d\" (UID: \"cafe239e-692a-4f8c-baf3-94b454ed706d\") " Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.730148 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx" (OuterVolumeSpecName: "kube-api-access-xb2wx") pod "cafe239e-692a-4f8c-baf3-94b454ed706d" (UID: "cafe239e-692a-4f8c-baf3-94b454ed706d"). InnerVolumeSpecName "kube-api-access-xb2wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.752985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory" (OuterVolumeSpecName: "inventory") pod "cafe239e-692a-4f8c-baf3-94b454ed706d" (UID: "cafe239e-692a-4f8c-baf3-94b454ed706d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.780686 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cafe239e-692a-4f8c-baf3-94b454ed706d" (UID: "cafe239e-692a-4f8c-baf3-94b454ed706d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.819803 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.819921 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2wx\" (UniqueName: \"kubernetes.io/projected/cafe239e-692a-4f8c-baf3-94b454ed706d-kube-api-access-xb2wx\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:45 crc kubenswrapper[4832]: I0131 05:14:45.819931 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cafe239e-692a-4f8c-baf3-94b454ed706d-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.202460 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" 
event={"ID":"cafe239e-692a-4f8c-baf3-94b454ed706d","Type":"ContainerDied","Data":"574671c39b0a0cf2e950a84cdd6a933cf49f2d07165d7b288479d59ce6e8299a"} Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.202514 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574671c39b0a0cf2e950a84cdd6a933cf49f2d07165d7b288479d59ce6e8299a" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.202581 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.331663 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88"] Jan 31 05:14:46 crc kubenswrapper[4832]: E0131 05:14:46.332345 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe239e-692a-4f8c-baf3-94b454ed706d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.332364 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe239e-692a-4f8c-baf3-94b454ed706d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.332582 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafe239e-692a-4f8c-baf3-94b454ed706d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.333292 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.336037 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.336718 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.336964 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.337281 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.337453 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.337586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.339399 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.339929 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.358755 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88"] Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435244 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435322 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.435805 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436026 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436230 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gss2\" (UniqueName: 
\"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436491 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.436960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538672 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: 
\"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.538964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gss2\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539071 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: 
\"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.539401 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.544525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.545797 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.546350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.546355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.547099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.547924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.547950 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.548685 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.549457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.551430 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.551637 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.556422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.556997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.563005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gss2\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mzk88\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:46 crc kubenswrapper[4832]: I0131 05:14:46.656962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:14:47 crc kubenswrapper[4832]: I0131 05:14:47.216769 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88"] Jan 31 05:14:48 crc kubenswrapper[4832]: I0131 05:14:48.232536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" event={"ID":"8ab8bc58-9ae3-4284-b959-164da6ebee5e","Type":"ContainerStarted","Data":"26f46c592c95d7ef390d87619eb44755c686739c04cf14547905e9f19a9407ee"} Jan 31 05:14:48 crc kubenswrapper[4832]: I0131 05:14:48.233280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" event={"ID":"8ab8bc58-9ae3-4284-b959-164da6ebee5e","Type":"ContainerStarted","Data":"fb42fa25b5c3a7264e951a5f15864f91b0bc65727658fb5af47a8d0b7701abf0"} Jan 31 05:14:48 crc kubenswrapper[4832]: I0131 05:14:48.257078 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" podStartSLOduration=1.790937028 podStartE2EDuration="2.257059312s" podCreationTimestamp="2026-01-31 05:14:46 +0000 UTC" firstStartedPulling="2026-01-31 05:14:47.231655555 +0000 UTC m=+1896.180477270" lastFinishedPulling="2026-01-31 05:14:47.697777869 +0000 UTC m=+1896.646599554" observedRunningTime="2026-01-31 05:14:48.250324033 +0000 UTC m=+1897.199145738" watchObservedRunningTime="2026-01-31 05:14:48.257059312 +0000 UTC m=+1897.205880997" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.135238 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns"] Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.138361 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.141772 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.145366 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns"] Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.145997 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.236448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.236593 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.236630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774mg\" (UniqueName: 
\"kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.338699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.338770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774mg\" (UniqueName: \"kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.338890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.340028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 
05:15:00.345519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.361785 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774mg\" (UniqueName: \"kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg\") pod \"collect-profiles-29497275-8p5ns\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.463324 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:00 crc kubenswrapper[4832]: I0131 05:15:00.887024 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns"] Jan 31 05:15:01 crc kubenswrapper[4832]: I0131 05:15:01.359342 4832 generic.go:334] "Generic (PLEG): container finished" podID="362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" containerID="55066a67b4030e55c5f84c4855b5854432369fd5782acd73f282588a16268c7f" exitCode=0 Jan 31 05:15:01 crc kubenswrapper[4832]: I0131 05:15:01.359450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" event={"ID":"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d","Type":"ContainerDied","Data":"55066a67b4030e55c5f84c4855b5854432369fd5782acd73f282588a16268c7f"} Jan 31 05:15:01 crc kubenswrapper[4832]: I0131 05:15:01.359669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" 
event={"ID":"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d","Type":"ContainerStarted","Data":"a8b95b4bf0d467cbe73b768b31b0846d9994f0ba43fa10ffe348e35c857b4f69"} Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.745616 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.885198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume\") pod \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.885325 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-774mg\" (UniqueName: \"kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg\") pod \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.885387 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume\") pod \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\" (UID: \"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d\") " Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.886188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume" (OuterVolumeSpecName: "config-volume") pod "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" (UID: "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.892003 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg" (OuterVolumeSpecName: "kube-api-access-774mg") pod "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" (UID: "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d"). InnerVolumeSpecName "kube-api-access-774mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.892074 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" (UID: "362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.987865 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.987899 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-774mg\" (UniqueName: \"kubernetes.io/projected/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-kube-api-access-774mg\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:02 crc kubenswrapper[4832]: I0131 05:15:02.987914 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:03 crc kubenswrapper[4832]: I0131 05:15:03.383775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" 
event={"ID":"362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d","Type":"ContainerDied","Data":"a8b95b4bf0d467cbe73b768b31b0846d9994f0ba43fa10ffe348e35c857b4f69"} Jan 31 05:15:03 crc kubenswrapper[4832]: I0131 05:15:03.384131 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b95b4bf0d467cbe73b768b31b0846d9994f0ba43fa10ffe348e35c857b4f69" Jan 31 05:15:03 crc kubenswrapper[4832]: I0131 05:15:03.383877 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497275-8p5ns" Jan 31 05:15:16 crc kubenswrapper[4832]: I0131 05:15:16.596512 4832 scope.go:117] "RemoveContainer" containerID="67ea7412fe852647062dfbe0a35ea651ad709dc0c0cb3188731f82144faa72d1" Jan 31 05:15:25 crc kubenswrapper[4832]: I0131 05:15:25.651694 4832 generic.go:334] "Generic (PLEG): container finished" podID="8ab8bc58-9ae3-4284-b959-164da6ebee5e" containerID="26f46c592c95d7ef390d87619eb44755c686739c04cf14547905e9f19a9407ee" exitCode=0 Jan 31 05:15:25 crc kubenswrapper[4832]: I0131 05:15:25.651764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" event={"ID":"8ab8bc58-9ae3-4284-b959-164da6ebee5e","Type":"ContainerDied","Data":"26f46c592c95d7ef390d87619eb44755c686739c04cf14547905e9f19a9407ee"} Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.134130 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236711 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236759 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236937 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 
05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.236965 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237067 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237088 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc 
kubenswrapper[4832]: I0131 05:15:27.237124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gss2\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237178 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.237196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle\") pod \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\" (UID: \"8ab8bc58-9ae3-4284-b959-164da6ebee5e\") " Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.248166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2" (OuterVolumeSpecName: "kube-api-access-9gss2") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "kube-api-access-9gss2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.248806 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.248868 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249054 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249249 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249326 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249841 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.249990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.250918 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.253149 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.257355 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.276322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.287167 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory" (OuterVolumeSpecName: "inventory") pod "8ab8bc58-9ae3-4284-b959-164da6ebee5e" (UID: "8ab8bc58-9ae3-4284-b959-164da6ebee5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.339844 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340237 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340343 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340454 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340544 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 
05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340656 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340729 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340800 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340872 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.340947 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.341020 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gss2\" (UniqueName: \"kubernetes.io/projected/8ab8bc58-9ae3-4284-b959-164da6ebee5e-kube-api-access-9gss2\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.341095 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.341162 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.341234 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab8bc58-9ae3-4284-b959-164da6ebee5e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.676323 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" event={"ID":"8ab8bc58-9ae3-4284-b959-164da6ebee5e","Type":"ContainerDied","Data":"fb42fa25b5c3a7264e951a5f15864f91b0bc65727658fb5af47a8d0b7701abf0"} Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.676418 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mzk88" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.676700 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb42fa25b5c3a7264e951a5f15864f91b0bc65727658fb5af47a8d0b7701abf0" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.905893 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj"] Jan 31 05:15:27 crc kubenswrapper[4832]: E0131 05:15:27.906481 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" containerName="collect-profiles" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.906515 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" containerName="collect-profiles" Jan 31 05:15:27 crc kubenswrapper[4832]: E0131 05:15:27.906677 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab8bc58-9ae3-4284-b959-164da6ebee5e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.906695 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab8bc58-9ae3-4284-b959-164da6ebee5e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.906937 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab8bc58-9ae3-4284-b959-164da6ebee5e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.906967 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="362d0ebd-ed1b-4bac-9e56-e4ed36b41c7d" containerName="collect-profiles" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.921268 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.925176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj"] Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.930586 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.930674 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.931063 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.931787 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:15:27 crc kubenswrapper[4832]: I0131 05:15:27.932183 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.061310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.061816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: 
\"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.061924 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.062148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.062203 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn48g\" (UniqueName: \"kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.164042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.164223 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.164399 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.164471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn48g\" (UniqueName: \"kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.164623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.165763 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: 
\"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.169914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.170083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.170730 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.183290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn48g\" (UniqueName: \"kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pvjvj\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.249432 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:15:28 crc kubenswrapper[4832]: I0131 05:15:28.877326 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj"] Jan 31 05:15:28 crc kubenswrapper[4832]: W0131 05:15:28.881652 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70dab5d9_fca1_425f_91e9_42b0013c2e64.slice/crio-0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9 WatchSource:0}: Error finding container 0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9: Status 404 returned error can't find the container with id 0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9 Jan 31 05:15:29 crc kubenswrapper[4832]: I0131 05:15:29.698987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" event={"ID":"70dab5d9-fca1-425f-91e9-42b0013c2e64","Type":"ContainerStarted","Data":"0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9"} Jan 31 05:15:30 crc kubenswrapper[4832]: I0131 05:15:30.712498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" event={"ID":"70dab5d9-fca1-425f-91e9-42b0013c2e64","Type":"ContainerStarted","Data":"c1e30a27f1d167f02a401b5878473dc891cbf336e550f06a5dd02c883e20edcd"} Jan 31 05:15:30 crc kubenswrapper[4832]: I0131 05:15:30.739378 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" podStartSLOduration=3.10626346 podStartE2EDuration="3.739358321s" podCreationTimestamp="2026-01-31 05:15:27 +0000 UTC" firstStartedPulling="2026-01-31 05:15:28.885987579 +0000 UTC m=+1937.834809294" lastFinishedPulling="2026-01-31 05:15:29.51908243 +0000 UTC m=+1938.467904155" observedRunningTime="2026-01-31 05:15:30.734964693 
+0000 UTC m=+1939.683786418" watchObservedRunningTime="2026-01-31 05:15:30.739358321 +0000 UTC m=+1939.688180016" Jan 31 05:16:18 crc kubenswrapper[4832]: I0131 05:16:18.540158 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:16:18 crc kubenswrapper[4832]: I0131 05:16:18.541014 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:16:34 crc kubenswrapper[4832]: I0131 05:16:34.353915 4832 generic.go:334] "Generic (PLEG): container finished" podID="70dab5d9-fca1-425f-91e9-42b0013c2e64" containerID="c1e30a27f1d167f02a401b5878473dc891cbf336e550f06a5dd02c883e20edcd" exitCode=0 Jan 31 05:16:34 crc kubenswrapper[4832]: I0131 05:16:34.354060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" event={"ID":"70dab5d9-fca1-425f-91e9-42b0013c2e64","Type":"ContainerDied","Data":"c1e30a27f1d167f02a401b5878473dc891cbf336e550f06a5dd02c883e20edcd"} Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.795272 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.870371 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory\") pod \"70dab5d9-fca1-425f-91e9-42b0013c2e64\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.870713 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn48g\" (UniqueName: \"kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g\") pod \"70dab5d9-fca1-425f-91e9-42b0013c2e64\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.870736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle\") pod \"70dab5d9-fca1-425f-91e9-42b0013c2e64\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.870786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0\") pod \"70dab5d9-fca1-425f-91e9-42b0013c2e64\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.870818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam\") pod \"70dab5d9-fca1-425f-91e9-42b0013c2e64\" (UID: \"70dab5d9-fca1-425f-91e9-42b0013c2e64\") " Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.889913 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g" (OuterVolumeSpecName: "kube-api-access-tn48g") pod "70dab5d9-fca1-425f-91e9-42b0013c2e64" (UID: "70dab5d9-fca1-425f-91e9-42b0013c2e64"). InnerVolumeSpecName "kube-api-access-tn48g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.889968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "70dab5d9-fca1-425f-91e9-42b0013c2e64" (UID: "70dab5d9-fca1-425f-91e9-42b0013c2e64"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.900692 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory" (OuterVolumeSpecName: "inventory") pod "70dab5d9-fca1-425f-91e9-42b0013c2e64" (UID: "70dab5d9-fca1-425f-91e9-42b0013c2e64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.924230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "70dab5d9-fca1-425f-91e9-42b0013c2e64" (UID: "70dab5d9-fca1-425f-91e9-42b0013c2e64"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.935784 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70dab5d9-fca1-425f-91e9-42b0013c2e64" (UID: "70dab5d9-fca1-425f-91e9-42b0013c2e64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.973275 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.973323 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn48g\" (UniqueName: \"kubernetes.io/projected/70dab5d9-fca1-425f-91e9-42b0013c2e64-kube-api-access-tn48g\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.973345 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.973368 4832 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/70dab5d9-fca1-425f-91e9-42b0013c2e64-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:35 crc kubenswrapper[4832]: I0131 05:16:35.973386 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70dab5d9-fca1-425f-91e9-42b0013c2e64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.377972 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" event={"ID":"70dab5d9-fca1-425f-91e9-42b0013c2e64","Type":"ContainerDied","Data":"0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9"} Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.378007 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pvjvj" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.378009 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0779186ff2d07bc97fb143538eaea5845fd06f84b82f02058b89ed82556e5de9" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.491379 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs"] Jan 31 05:16:36 crc kubenswrapper[4832]: E0131 05:16:36.492049 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dab5d9-fca1-425f-91e9-42b0013c2e64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.492083 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dab5d9-fca1-425f-91e9-42b0013c2e64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.492381 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="70dab5d9-fca1-425f-91e9-42b0013c2e64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.493385 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.495610 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.495976 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.496055 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.497168 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.497756 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.498175 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.506853 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs"] Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.585999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.586059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.586118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9ch\" (UniqueName: \"kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.586191 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.586240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.586272 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687254 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9ch\" (UniqueName: \"kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687313 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.687705 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.692553 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.693671 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.693880 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.693915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.694221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.703032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9ch\" (UniqueName: \"kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:36 crc kubenswrapper[4832]: I0131 05:16:36.833957 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:16:37 crc kubenswrapper[4832]: I0131 05:16:37.384811 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:16:37 crc kubenswrapper[4832]: I0131 05:16:37.390186 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs"] Jan 31 05:16:38 crc kubenswrapper[4832]: I0131 05:16:38.398760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" event={"ID":"ce3f980d-61a1-4d42-8b56-f7a064c667da","Type":"ContainerStarted","Data":"f28ea25af9976d6e4bc496ece4fdfe4f0bdcb312a68dcef713a96d2d2d7f4f91"} Jan 31 05:16:38 crc kubenswrapper[4832]: I0131 05:16:38.399225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" event={"ID":"ce3f980d-61a1-4d42-8b56-f7a064c667da","Type":"ContainerStarted","Data":"4437e7357f8c7a1e8c3f265092fc31519918361b0aff536c944d52063289c9d3"} Jan 31 05:16:38 crc kubenswrapper[4832]: I0131 05:16:38.425049 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" podStartSLOduration=1.935777466 podStartE2EDuration="2.4250334s" podCreationTimestamp="2026-01-31 05:16:36 +0000 UTC" firstStartedPulling="2026-01-31 05:16:37.384435149 +0000 UTC m=+2006.333256854" lastFinishedPulling="2026-01-31 05:16:37.873691063 +0000 UTC m=+2006.822512788" observedRunningTime="2026-01-31 05:16:38.422517931 +0000 UTC m=+2007.371339656" watchObservedRunningTime="2026-01-31 05:16:38.4250334 +0000 UTC m=+2007.373855085" Jan 31 
05:16:48 crc kubenswrapper[4832]: I0131 05:16:48.540093 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:16:48 crc kubenswrapper[4832]: I0131 05:16:48.542694 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.540918 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.541765 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.541831 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.542912 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53"} 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.543023 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53" gracePeriod=600 Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.845774 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53" exitCode=0 Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.845830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53"} Jan 31 05:17:18 crc kubenswrapper[4832]: I0131 05:17:18.846310 4832 scope.go:117] "RemoveContainer" containerID="1ebe71fcba0be1629b9cde2717fe15513a8de22dc0b3531a39de07c30b731c90" Jan 31 05:17:19 crc kubenswrapper[4832]: I0131 05:17:19.871970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70"} Jan 31 05:17:26 crc kubenswrapper[4832]: I0131 05:17:26.927372 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce3f980d-61a1-4d42-8b56-f7a064c667da" containerID="f28ea25af9976d6e4bc496ece4fdfe4f0bdcb312a68dcef713a96d2d2d7f4f91" exitCode=0 Jan 31 05:17:26 crc kubenswrapper[4832]: I0131 05:17:26.927475 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" event={"ID":"ce3f980d-61a1-4d42-8b56-f7a064c667da","Type":"ContainerDied","Data":"f28ea25af9976d6e4bc496ece4fdfe4f0bdcb312a68dcef713a96d2d2d7f4f91"} Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.358120 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.520143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.520493 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.520686 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.520768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: 
\"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.520879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.521012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq9ch\" (UniqueName: \"kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch\") pod \"ce3f980d-61a1-4d42-8b56-f7a064c667da\" (UID: \"ce3f980d-61a1-4d42-8b56-f7a064c667da\") " Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.526462 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.528746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch" (OuterVolumeSpecName: "kube-api-access-lq9ch") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "kube-api-access-lq9ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.550915 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.556611 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.558705 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.563913 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory" (OuterVolumeSpecName: "inventory") pod "ce3f980d-61a1-4d42-8b56-f7a064c667da" (UID: "ce3f980d-61a1-4d42-8b56-f7a064c667da"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625177 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625401 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625465 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625526 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625600 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq9ch\" (UniqueName: \"kubernetes.io/projected/ce3f980d-61a1-4d42-8b56-f7a064c667da-kube-api-access-lq9ch\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.625674 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ce3f980d-61a1-4d42-8b56-f7a064c667da-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.950630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" event={"ID":"ce3f980d-61a1-4d42-8b56-f7a064c667da","Type":"ContainerDied","Data":"4437e7357f8c7a1e8c3f265092fc31519918361b0aff536c944d52063289c9d3"} Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.950726 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4437e7357f8c7a1e8c3f265092fc31519918361b0aff536c944d52063289c9d3" Jan 31 05:17:28 crc kubenswrapper[4832]: I0131 05:17:28.950748 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.086389 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj"] Jan 31 05:17:29 crc kubenswrapper[4832]: E0131 05:17:29.087174 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3f980d-61a1-4d42-8b56-f7a064c667da" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.087283 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3f980d-61a1-4d42-8b56-f7a064c667da" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.087588 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3f980d-61a1-4d42-8b56-f7a064c667da" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.088287 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.090784 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.091071 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.091357 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.091598 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.092056 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.097285 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj"] Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.236743 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.236847 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: 
\"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.236930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh8lm\" (UniqueName: \"kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.236949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.236965 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.338418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.339049 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh8lm\" (UniqueName: \"kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.339593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.340113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.340321 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.345728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.347084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.348051 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.350664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.359979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh8lm\" (UniqueName: \"kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qppj\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.406175 4832 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.734232 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj"] Jan 31 05:17:29 crc kubenswrapper[4832]: I0131 05:17:29.961021 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" event={"ID":"89932c58-5727-49df-bd91-903acb18f444","Type":"ContainerStarted","Data":"c610610e5ddcc28e7fcc8ac06c9f15240f733670c57981d14a1baa2a02192b9a"} Jan 31 05:17:30 crc kubenswrapper[4832]: I0131 05:17:30.969468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" event={"ID":"89932c58-5727-49df-bd91-903acb18f444","Type":"ContainerStarted","Data":"aec3a46dff5f57fbe2f45f286a5dd85ce411fa1cfb19059347780380fed4c7c0"} Jan 31 05:17:30 crc kubenswrapper[4832]: I0131 05:17:30.990379 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" podStartSLOduration=1.5841784909999999 podStartE2EDuration="1.99035657s" podCreationTimestamp="2026-01-31 05:17:29 +0000 UTC" firstStartedPulling="2026-01-31 05:17:29.740646923 +0000 UTC m=+2058.689468608" lastFinishedPulling="2026-01-31 05:17:30.146825002 +0000 UTC m=+2059.095646687" observedRunningTime="2026-01-31 05:17:30.985877641 +0000 UTC m=+2059.934699336" watchObservedRunningTime="2026-01-31 05:17:30.99035657 +0000 UTC m=+2059.939178265" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.025984 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.032887 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.052412 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.143163 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.143407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.143615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns7m5\" (UniqueName: \"kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.246608 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns7m5\" (UniqueName: \"kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.246745 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.246771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.247308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.247431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.266795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns7m5\" (UniqueName: \"kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5\") pod \"certified-operators-9qmq8\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.370021 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:10 crc kubenswrapper[4832]: I0131 05:18:10.939712 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:11 crc kubenswrapper[4832]: I0131 05:18:11.414253 4832 generic.go:334] "Generic (PLEG): container finished" podID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerID="ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b" exitCode=0 Jan 31 05:18:11 crc kubenswrapper[4832]: I0131 05:18:11.414333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerDied","Data":"ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b"} Jan 31 05:18:11 crc kubenswrapper[4832]: I0131 05:18:11.414415 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerStarted","Data":"f7fbf69054cf0795867908f60d92b0aa08a262e901d91370c98c268e957a18e2"} Jan 31 05:18:12 crc kubenswrapper[4832]: I0131 05:18:12.425779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerStarted","Data":"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079"} Jan 31 05:18:13 crc kubenswrapper[4832]: I0131 05:18:13.438196 4832 generic.go:334] "Generic (PLEG): container finished" podID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerID="77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079" exitCode=0 Jan 31 05:18:13 crc kubenswrapper[4832]: I0131 05:18:13.438246 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" 
event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerDied","Data":"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079"} Jan 31 05:18:14 crc kubenswrapper[4832]: I0131 05:18:14.451741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerStarted","Data":"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad"} Jan 31 05:18:14 crc kubenswrapper[4832]: I0131 05:18:14.476755 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9qmq8" podStartSLOduration=3.072445558 podStartE2EDuration="5.476732622s" podCreationTimestamp="2026-01-31 05:18:09 +0000 UTC" firstStartedPulling="2026-01-31 05:18:11.417424616 +0000 UTC m=+2100.366246301" lastFinishedPulling="2026-01-31 05:18:13.82171168 +0000 UTC m=+2102.770533365" observedRunningTime="2026-01-31 05:18:14.472869122 +0000 UTC m=+2103.421690817" watchObservedRunningTime="2026-01-31 05:18:14.476732622 +0000 UTC m=+2103.425554307" Jan 31 05:18:20 crc kubenswrapper[4832]: I0131 05:18:20.370304 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:20 crc kubenswrapper[4832]: I0131 05:18:20.371046 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:20 crc kubenswrapper[4832]: I0131 05:18:20.476073 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:20 crc kubenswrapper[4832]: I0131 05:18:20.559210 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:20 crc kubenswrapper[4832]: I0131 05:18:20.715117 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:22 crc kubenswrapper[4832]: I0131 05:18:22.534756 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9qmq8" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="registry-server" containerID="cri-o://ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad" gracePeriod=2 Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.164458 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.241970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns7m5\" (UniqueName: \"kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5\") pod \"393ed00b-d41b-41a3-baf8-3e40ed96334c\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.242260 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content\") pod \"393ed00b-d41b-41a3-baf8-3e40ed96334c\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.242351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities\") pod \"393ed00b-d41b-41a3-baf8-3e40ed96334c\" (UID: \"393ed00b-d41b-41a3-baf8-3e40ed96334c\") " Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.243757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities" (OuterVolumeSpecName: "utilities") pod "393ed00b-d41b-41a3-baf8-3e40ed96334c" (UID: 
"393ed00b-d41b-41a3-baf8-3e40ed96334c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.253467 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5" (OuterVolumeSpecName: "kube-api-access-ns7m5") pod "393ed00b-d41b-41a3-baf8-3e40ed96334c" (UID: "393ed00b-d41b-41a3-baf8-3e40ed96334c"). InnerVolumeSpecName "kube-api-access-ns7m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.345612 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.345652 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns7m5\" (UniqueName: \"kubernetes.io/projected/393ed00b-d41b-41a3-baf8-3e40ed96334c-kube-api-access-ns7m5\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.549419 4832 generic.go:334] "Generic (PLEG): container finished" podID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerID="ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad" exitCode=0 Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.549480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerDied","Data":"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad"} Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.549516 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qmq8" 
event={"ID":"393ed00b-d41b-41a3-baf8-3e40ed96334c","Type":"ContainerDied","Data":"f7fbf69054cf0795867908f60d92b0aa08a262e901d91370c98c268e957a18e2"} Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.549538 4832 scope.go:117] "RemoveContainer" containerID="ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.549754 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qmq8" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.581071 4832 scope.go:117] "RemoveContainer" containerID="77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.617773 4832 scope.go:117] "RemoveContainer" containerID="ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.684802 4832 scope.go:117] "RemoveContainer" containerID="ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad" Jan 31 05:18:23 crc kubenswrapper[4832]: E0131 05:18:23.685363 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad\": container with ID starting with ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad not found: ID does not exist" containerID="ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.685409 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad"} err="failed to get container status \"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad\": rpc error: code = NotFound desc = could not find container \"ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad\": 
container with ID starting with ae841657d07295b326a90e49a67857acf7a802d2e81611a2ba8fd376e25167ad not found: ID does not exist" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.685441 4832 scope.go:117] "RemoveContainer" containerID="77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079" Jan 31 05:18:23 crc kubenswrapper[4832]: E0131 05:18:23.685940 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079\": container with ID starting with 77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079 not found: ID does not exist" containerID="77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.685968 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079"} err="failed to get container status \"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079\": rpc error: code = NotFound desc = could not find container \"77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079\": container with ID starting with 77dabe6537ba97679fd5f0e1c72c4e4cc25ccb479ad00cfd5839ed4d4caf4079 not found: ID does not exist" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.685982 4832 scope.go:117] "RemoveContainer" containerID="ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b" Jan 31 05:18:23 crc kubenswrapper[4832]: E0131 05:18:23.686234 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b\": container with ID starting with ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b not found: ID does not exist" 
containerID="ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.686259 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b"} err="failed to get container status \"ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b\": rpc error: code = NotFound desc = could not find container \"ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b\": container with ID starting with ec1454d8bd630c8457acafda08fb70ecb3e78c65a9094f8bb2d0cf38a4ea442b not found: ID does not exist" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.854151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "393ed00b-d41b-41a3-baf8-3e40ed96334c" (UID: "393ed00b-d41b-41a3-baf8-3e40ed96334c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:23 crc kubenswrapper[4832]: I0131 05:18:23.860202 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/393ed00b-d41b-41a3-baf8-3e40ed96334c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:24 crc kubenswrapper[4832]: I0131 05:18:24.174377 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:24 crc kubenswrapper[4832]: I0131 05:18:24.181556 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9qmq8"] Jan 31 05:18:25 crc kubenswrapper[4832]: I0131 05:18:25.874015 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" path="/var/lib/kubelet/pods/393ed00b-d41b-41a3-baf8-3e40ed96334c/volumes" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.130027 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:26 crc kubenswrapper[4832]: E0131 05:18:26.130726 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="extract-content" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.130760 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="extract-content" Jan 31 05:18:26 crc kubenswrapper[4832]: E0131 05:18:26.130801 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="extract-utilities" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.130814 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="extract-utilities" Jan 31 05:18:26 crc kubenswrapper[4832]: E0131 05:18:26.130857 4832 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="registry-server" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.130865 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="registry-server" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.131104 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="393ed00b-d41b-41a3-baf8-3e40ed96334c" containerName="registry-server" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.133160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.164123 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.225946 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.226024 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.226092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5sn\" (UniqueName: \"kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn\") pod \"redhat-operators-gqnbv\" (UID: 
\"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.328441 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.328539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.328657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5sn\" (UniqueName: \"kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.329188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.329251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " 
pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.349126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5sn\" (UniqueName: \"kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn\") pod \"redhat-operators-gqnbv\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:26 crc kubenswrapper[4832]: I0131 05:18:26.483275 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:27 crc kubenswrapper[4832]: W0131 05:18:27.013577 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f069912_6246_42e1_8ea0_6ee03a6f3072.slice/crio-b2483b2e49882dc41cbfa5f4cc04f8e5193d4a806d10cd1c78d481d40c71e82c WatchSource:0}: Error finding container b2483b2e49882dc41cbfa5f4cc04f8e5193d4a806d10cd1c78d481d40c71e82c: Status 404 returned error can't find the container with id b2483b2e49882dc41cbfa5f4cc04f8e5193d4a806d10cd1c78d481d40c71e82c Jan 31 05:18:27 crc kubenswrapper[4832]: I0131 05:18:27.019867 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:27 crc kubenswrapper[4832]: I0131 05:18:27.614851 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerID="6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143" exitCode=0 Jan 31 05:18:27 crc kubenswrapper[4832]: I0131 05:18:27.615044 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerDied","Data":"6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143"} Jan 31 05:18:27 crc kubenswrapper[4832]: I0131 05:18:27.615116 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerStarted","Data":"b2483b2e49882dc41cbfa5f4cc04f8e5193d4a806d10cd1c78d481d40c71e82c"} Jan 31 05:18:29 crc kubenswrapper[4832]: I0131 05:18:29.639785 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerID="6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa" exitCode=0 Jan 31 05:18:29 crc kubenswrapper[4832]: I0131 05:18:29.639852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerDied","Data":"6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa"} Jan 31 05:18:30 crc kubenswrapper[4832]: I0131 05:18:30.658024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerStarted","Data":"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6"} Jan 31 05:18:30 crc kubenswrapper[4832]: I0131 05:18:30.683328 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqnbv" podStartSLOduration=2.2604194890000002 podStartE2EDuration="4.683295461s" podCreationTimestamp="2026-01-31 05:18:26 +0000 UTC" firstStartedPulling="2026-01-31 05:18:27.61676554 +0000 UTC m=+2116.565587225" lastFinishedPulling="2026-01-31 05:18:30.039641512 +0000 UTC m=+2118.988463197" observedRunningTime="2026-01-31 05:18:30.681146564 +0000 UTC m=+2119.629968249" watchObservedRunningTime="2026-01-31 05:18:30.683295461 +0000 UTC m=+2119.632117166" Jan 31 05:18:36 crc kubenswrapper[4832]: I0131 05:18:36.483665 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:36 crc 
kubenswrapper[4832]: I0131 05:18:36.484339 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:37 crc kubenswrapper[4832]: I0131 05:18:37.553099 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqnbv" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="registry-server" probeResult="failure" output=< Jan 31 05:18:37 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:18:37 crc kubenswrapper[4832]: > Jan 31 05:18:46 crc kubenswrapper[4832]: I0131 05:18:46.541395 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:46 crc kubenswrapper[4832]: I0131 05:18:46.594705 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:46 crc kubenswrapper[4832]: I0131 05:18:46.780537 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:47 crc kubenswrapper[4832]: I0131 05:18:47.836725 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqnbv" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="registry-server" containerID="cri-o://3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6" gracePeriod=2 Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.336156 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.361403 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities\") pod \"7f069912-6246-42e1-8ea0-6ee03a6f3072\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.361672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf5sn\" (UniqueName: \"kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn\") pod \"7f069912-6246-42e1-8ea0-6ee03a6f3072\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.361742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content\") pod \"7f069912-6246-42e1-8ea0-6ee03a6f3072\" (UID: \"7f069912-6246-42e1-8ea0-6ee03a6f3072\") " Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.362498 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities" (OuterVolumeSpecName: "utilities") pod "7f069912-6246-42e1-8ea0-6ee03a6f3072" (UID: "7f069912-6246-42e1-8ea0-6ee03a6f3072"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.368419 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn" (OuterVolumeSpecName: "kube-api-access-wf5sn") pod "7f069912-6246-42e1-8ea0-6ee03a6f3072" (UID: "7f069912-6246-42e1-8ea0-6ee03a6f3072"). InnerVolumeSpecName "kube-api-access-wf5sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.468375 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.468414 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf5sn\" (UniqueName: \"kubernetes.io/projected/7f069912-6246-42e1-8ea0-6ee03a6f3072-kube-api-access-wf5sn\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.499395 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f069912-6246-42e1-8ea0-6ee03a6f3072" (UID: "7f069912-6246-42e1-8ea0-6ee03a6f3072"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.570752 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f069912-6246-42e1-8ea0-6ee03a6f3072-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.849719 4832 generic.go:334] "Generic (PLEG): container finished" podID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerID="3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6" exitCode=0 Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.849810 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqnbv" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.849825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerDied","Data":"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6"} Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.850376 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqnbv" event={"ID":"7f069912-6246-42e1-8ea0-6ee03a6f3072","Type":"ContainerDied","Data":"b2483b2e49882dc41cbfa5f4cc04f8e5193d4a806d10cd1c78d481d40c71e82c"} Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.850401 4832 scope.go:117] "RemoveContainer" containerID="3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.882973 4832 scope.go:117] "RemoveContainer" containerID="6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.897750 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.908892 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqnbv"] Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.918688 4832 scope.go:117] "RemoveContainer" containerID="6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.964934 4832 scope.go:117] "RemoveContainer" containerID="3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6" Jan 31 05:18:48 crc kubenswrapper[4832]: E0131 05:18:48.965730 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6\": container with ID starting with 3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6 not found: ID does not exist" containerID="3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.965813 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6"} err="failed to get container status \"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6\": rpc error: code = NotFound desc = could not find container \"3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6\": container with ID starting with 3403b98f6f514af2fbbb1070d7dbc5cfd78590b709a3d2b51cb54eebde816df6 not found: ID does not exist" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.965860 4832 scope.go:117] "RemoveContainer" containerID="6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa" Jan 31 05:18:48 crc kubenswrapper[4832]: E0131 05:18:48.966399 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa\": container with ID starting with 6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa not found: ID does not exist" containerID="6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.966433 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa"} err="failed to get container status \"6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa\": rpc error: code = NotFound desc = could not find container \"6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa\": container with ID 
starting with 6cd4d2ec993b6577f37268696a6c4daa8865ba21d8142f62dfc115fdd2149aaa not found: ID does not exist" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.966465 4832 scope.go:117] "RemoveContainer" containerID="6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143" Jan 31 05:18:48 crc kubenswrapper[4832]: E0131 05:18:48.966788 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143\": container with ID starting with 6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143 not found: ID does not exist" containerID="6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143" Jan 31 05:18:48 crc kubenswrapper[4832]: I0131 05:18:48.966835 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143"} err="failed to get container status \"6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143\": rpc error: code = NotFound desc = could not find container \"6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143\": container with ID starting with 6e80a9ce56ce0af59118825ace4430133d7f0858a287b237961f8fd2a8eed143 not found: ID does not exist" Jan 31 05:18:49 crc kubenswrapper[4832]: I0131 05:18:49.881877 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" path="/var/lib/kubelet/pods/7f069912-6246-42e1-8ea0-6ee03a6f3072/volumes" Jan 31 05:19:18 crc kubenswrapper[4832]: I0131 05:19:18.540417 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:19:18 crc kubenswrapper[4832]: I0131 
05:19:18.541342 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:19:48 crc kubenswrapper[4832]: I0131 05:19:48.540234 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:19:48 crc kubenswrapper[4832]: I0131 05:19:48.540692 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.677513 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:19:56 crc kubenswrapper[4832]: E0131 05:19:56.678355 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="extract-utilities" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.678368 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="extract-utilities" Jan 31 05:19:56 crc kubenswrapper[4832]: E0131 05:19:56.678394 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="extract-content" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.678400 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="extract-content" Jan 31 05:19:56 crc kubenswrapper[4832]: E0131 05:19:56.678428 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="registry-server" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.678434 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="registry-server" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.678617 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f069912-6246-42e1-8ea0-6ee03a6f3072" containerName="registry-server" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.680032 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.687103 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.867137 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.867186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.867202 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhlb\" (UniqueName: \"kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.969792 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.969867 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.969892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhlb\" (UniqueName: \"kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.970359 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.970676 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:56 crc kubenswrapper[4832]: I0131 05:19:56.992983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhlb\" (UniqueName: \"kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb\") pod \"redhat-marketplace-6zxq4\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:57 crc kubenswrapper[4832]: I0131 05:19:57.009134 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:19:57 crc kubenswrapper[4832]: I0131 05:19:57.478258 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:19:57 crc kubenswrapper[4832]: I0131 05:19:57.658426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerStarted","Data":"2fd04228b2fd5cc6a73652db51ad84d764617a691ed8d9505e47b9e7844632ac"} Jan 31 05:19:58 crc kubenswrapper[4832]: I0131 05:19:58.669725 4832 generic.go:334] "Generic (PLEG): container finished" podID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerID="1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae" exitCode=0 Jan 31 05:19:58 crc kubenswrapper[4832]: I0131 05:19:58.670038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerDied","Data":"1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae"} Jan 31 05:19:59 crc kubenswrapper[4832]: I0131 
05:19:59.687474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerStarted","Data":"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a"} Jan 31 05:20:00 crc kubenswrapper[4832]: I0131 05:20:00.700170 4832 generic.go:334] "Generic (PLEG): container finished" podID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerID="35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a" exitCode=0 Jan 31 05:20:00 crc kubenswrapper[4832]: I0131 05:20:00.700223 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerDied","Data":"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a"} Jan 31 05:20:00 crc kubenswrapper[4832]: I0131 05:20:00.700486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerStarted","Data":"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d"} Jan 31 05:20:00 crc kubenswrapper[4832]: I0131 05:20:00.735467 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6zxq4" podStartSLOduration=3.333603022 podStartE2EDuration="4.735438811s" podCreationTimestamp="2026-01-31 05:19:56 +0000 UTC" firstStartedPulling="2026-01-31 05:19:58.672245525 +0000 UTC m=+2207.621067210" lastFinishedPulling="2026-01-31 05:20:00.074081314 +0000 UTC m=+2209.022902999" observedRunningTime="2026-01-31 05:20:00.727122883 +0000 UTC m=+2209.675944568" watchObservedRunningTime="2026-01-31 05:20:00.735438811 +0000 UTC m=+2209.684260516" Jan 31 05:20:07 crc kubenswrapper[4832]: I0131 05:20:07.010025 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:07 crc kubenswrapper[4832]: I0131 05:20:07.011039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:07 crc kubenswrapper[4832]: I0131 05:20:07.088135 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:07 crc kubenswrapper[4832]: I0131 05:20:07.904689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:07 crc kubenswrapper[4832]: I0131 05:20:07.964972 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:20:09 crc kubenswrapper[4832]: I0131 05:20:09.815577 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6zxq4" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="registry-server" containerID="cri-o://902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d" gracePeriod=2 Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.315179 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.477094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities\") pod \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.477412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhlb\" (UniqueName: \"kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb\") pod \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.477438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content\") pod \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\" (UID: \"3d327bbf-f08c-438f-8bc1-0f2952a63fac\") " Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.478063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities" (OuterVolumeSpecName: "utilities") pod "3d327bbf-f08c-438f-8bc1-0f2952a63fac" (UID: "3d327bbf-f08c-438f-8bc1-0f2952a63fac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.486726 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb" (OuterVolumeSpecName: "kube-api-access-skhlb") pod "3d327bbf-f08c-438f-8bc1-0f2952a63fac" (UID: "3d327bbf-f08c-438f-8bc1-0f2952a63fac"). InnerVolumeSpecName "kube-api-access-skhlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.502627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d327bbf-f08c-438f-8bc1-0f2952a63fac" (UID: "3d327bbf-f08c-438f-8bc1-0f2952a63fac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.579815 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhlb\" (UniqueName: \"kubernetes.io/projected/3d327bbf-f08c-438f-8bc1-0f2952a63fac-kube-api-access-skhlb\") on node \"crc\" DevicePath \"\"" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.579843 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.579852 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d327bbf-f08c-438f-8bc1-0f2952a63fac-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.833307 4832 generic.go:334] "Generic (PLEG): container finished" podID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerID="902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d" exitCode=0 Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.833380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerDied","Data":"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d"} Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.833429 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-6zxq4" event={"ID":"3d327bbf-f08c-438f-8bc1-0f2952a63fac","Type":"ContainerDied","Data":"2fd04228b2fd5cc6a73652db51ad84d764617a691ed8d9505e47b9e7844632ac"} Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.833466 4832 scope.go:117] "RemoveContainer" containerID="902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.833461 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6zxq4" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.871885 4832 scope.go:117] "RemoveContainer" containerID="35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.888511 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.897330 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6zxq4"] Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.921331 4832 scope.go:117] "RemoveContainer" containerID="1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.976123 4832 scope.go:117] "RemoveContainer" containerID="902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d" Jan 31 05:20:10 crc kubenswrapper[4832]: E0131 05:20:10.977242 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d\": container with ID starting with 902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d not found: ID does not exist" containerID="902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.977284 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d"} err="failed to get container status \"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d\": rpc error: code = NotFound desc = could not find container \"902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d\": container with ID starting with 902bcac0ae6df8705a7075162e71624e4870eb16ed547974367ea7d3d56ba76d not found: ID does not exist" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.977310 4832 scope.go:117] "RemoveContainer" containerID="35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a" Jan 31 05:20:10 crc kubenswrapper[4832]: E0131 05:20:10.977786 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a\": container with ID starting with 35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a not found: ID does not exist" containerID="35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.977822 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a"} err="failed to get container status \"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a\": rpc error: code = NotFound desc = could not find container \"35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a\": container with ID starting with 35f5c973d41c78b652d69183b3dd861be14d560cfe69549b5775c2c61c17e36a not found: ID does not exist" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.977839 4832 scope.go:117] "RemoveContainer" containerID="1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae" Jan 31 05:20:10 crc kubenswrapper[4832]: E0131 
05:20:10.978102 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae\": container with ID starting with 1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae not found: ID does not exist" containerID="1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae" Jan 31 05:20:10 crc kubenswrapper[4832]: I0131 05:20:10.978133 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae"} err="failed to get container status \"1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae\": rpc error: code = NotFound desc = could not find container \"1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae\": container with ID starting with 1075e14ed00a81f200aee576da93442689710604cf0acd2973a8a319895f45ae not found: ID does not exist" Jan 31 05:20:11 crc kubenswrapper[4832]: I0131 05:20:11.879323 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" path="/var/lib/kubelet/pods/3d327bbf-f08c-438f-8bc1-0f2952a63fac/volumes" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.540503 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.542039 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.542367 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.543145 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.543305 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" gracePeriod=600 Jan 31 05:20:18 crc kubenswrapper[4832]: E0131 05:20:18.675127 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.923304 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" exitCode=0 Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.923349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" 
event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70"} Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.923406 4832 scope.go:117] "RemoveContainer" containerID="6bedd6a8a1f4f5a4ad06b8307a6d0f1f23ba9781c43cb4e926f0e3a33ef0bb53" Jan 31 05:20:18 crc kubenswrapper[4832]: I0131 05:20:18.923880 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:20:18 crc kubenswrapper[4832]: E0131 05:20:18.924155 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:20:33 crc kubenswrapper[4832]: I0131 05:20:33.860596 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:20:33 crc kubenswrapper[4832]: E0131 05:20:33.861502 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:20:44 crc kubenswrapper[4832]: I0131 05:20:44.860250 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:20:44 crc kubenswrapper[4832]: E0131 05:20:44.861496 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:20:56 crc kubenswrapper[4832]: I0131 05:20:56.859928 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:20:56 crc kubenswrapper[4832]: E0131 05:20:56.860682 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:21:08 crc kubenswrapper[4832]: I0131 05:21:08.859147 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:21:08 crc kubenswrapper[4832]: E0131 05:21:08.860124 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:21:21 crc kubenswrapper[4832]: I0131 05:21:21.590456 4832 generic.go:334] "Generic (PLEG): container finished" podID="89932c58-5727-49df-bd91-903acb18f444" containerID="aec3a46dff5f57fbe2f45f286a5dd85ce411fa1cfb19059347780380fed4c7c0" exitCode=0 Jan 31 05:21:21 crc kubenswrapper[4832]: 
I0131 05:21:21.590542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" event={"ID":"89932c58-5727-49df-bd91-903acb18f444","Type":"ContainerDied","Data":"aec3a46dff5f57fbe2f45f286a5dd85ce411fa1cfb19059347780380fed4c7c0"} Jan 31 05:21:21 crc kubenswrapper[4832]: I0131 05:21:21.871905 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:21:21 crc kubenswrapper[4832]: E0131 05:21:21.872967 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.159260 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.251075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh8lm\" (UniqueName: \"kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm\") pod \"89932c58-5727-49df-bd91-903acb18f444\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.251311 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0\") pod \"89932c58-5727-49df-bd91-903acb18f444\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.251418 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam\") pod \"89932c58-5727-49df-bd91-903acb18f444\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.251465 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory\") pod \"89932c58-5727-49df-bd91-903acb18f444\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.251551 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle\") pod \"89932c58-5727-49df-bd91-903acb18f444\" (UID: \"89932c58-5727-49df-bd91-903acb18f444\") " Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.261695 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "89932c58-5727-49df-bd91-903acb18f444" (UID: "89932c58-5727-49df-bd91-903acb18f444"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.296851 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm" (OuterVolumeSpecName: "kube-api-access-lh8lm") pod "89932c58-5727-49df-bd91-903acb18f444" (UID: "89932c58-5727-49df-bd91-903acb18f444"). InnerVolumeSpecName "kube-api-access-lh8lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.297675 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory" (OuterVolumeSpecName: "inventory") pod "89932c58-5727-49df-bd91-903acb18f444" (UID: "89932c58-5727-49df-bd91-903acb18f444"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.344730 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "89932c58-5727-49df-bd91-903acb18f444" (UID: "89932c58-5727-49df-bd91-903acb18f444"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.355375 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.355422 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh8lm\" (UniqueName: \"kubernetes.io/projected/89932c58-5727-49df-bd91-903acb18f444-kube-api-access-lh8lm\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.355645 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.355656 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.377267 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89932c58-5727-49df-bd91-903acb18f444" (UID: "89932c58-5727-49df-bd91-903acb18f444"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.457967 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89932c58-5727-49df-bd91-903acb18f444-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.613363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" event={"ID":"89932c58-5727-49df-bd91-903acb18f444","Type":"ContainerDied","Data":"c610610e5ddcc28e7fcc8ac06c9f15240f733670c57981d14a1baa2a02192b9a"} Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.613397 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c610610e5ddcc28e7fcc8ac06c9f15240f733670c57981d14a1baa2a02192b9a" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.613462 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qppj" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.731704 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr"] Jan 31 05:21:23 crc kubenswrapper[4832]: E0131 05:21:23.732092 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="extract-content" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732107 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="extract-content" Jan 31 05:21:23 crc kubenswrapper[4832]: E0131 05:21:23.732132 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89932c58-5727-49df-bd91-903acb18f444" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732140 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="89932c58-5727-49df-bd91-903acb18f444" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 05:21:23 crc kubenswrapper[4832]: E0131 05:21:23.732151 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="registry-server" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732157 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="registry-server" Jan 31 05:21:23 crc kubenswrapper[4832]: E0131 05:21:23.732169 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="extract-utilities" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732175 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="extract-utilities" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732333 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89932c58-5727-49df-bd91-903acb18f444" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.732344 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d327bbf-f08c-438f-8bc1-0f2952a63fac" containerName="registry-server" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.733147 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.737479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.737545 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.737551 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.737948 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.738068 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.738923 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.739094 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.761148 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr"] Jan 31 05:21:23 crc 
kubenswrapper[4832]: I0131 05:21:23.868780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.868878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.868910 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869013 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869033 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869197 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.869389 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skh4t\" (UniqueName: \"kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.971809 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skh4t\" (UniqueName: \"kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.971883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972057 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972263 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972391 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.972453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.973386 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.977153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.978503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.979254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.979419 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.979839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.980268 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.981747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:23 crc kubenswrapper[4832]: I0131 05:21:23.996588 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skh4t\" (UniqueName: \"kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9rvr\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:24 crc kubenswrapper[4832]: I0131 05:21:24.057763 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:21:24 crc kubenswrapper[4832]: I0131 05:21:24.468162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr"] Jan 31 05:21:24 crc kubenswrapper[4832]: I0131 05:21:24.624574 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" event={"ID":"3b3b6eae-8f54-4057-b9c8-74f27b762ada","Type":"ContainerStarted","Data":"b7117c478d2945f387ab7532bbf4f0c841992e5b5313da43d7af85ead6282586"} Jan 31 05:21:25 crc kubenswrapper[4832]: I0131 05:21:25.633438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" event={"ID":"3b3b6eae-8f54-4057-b9c8-74f27b762ada","Type":"ContainerStarted","Data":"c28a990b2cf306fc0f9cbcd078661981ffdfc823923a3a270486d4df4c6cbbc5"} Jan 31 05:21:33 crc kubenswrapper[4832]: I0131 05:21:33.860193 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:21:33 crc kubenswrapper[4832]: E0131 05:21:33.861435 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:21:48 crc kubenswrapper[4832]: I0131 05:21:48.860127 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:21:48 crc 
kubenswrapper[4832]: E0131 05:21:48.861335 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:22:01 crc kubenswrapper[4832]: I0131 05:22:01.870232 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:22:01 crc kubenswrapper[4832]: E0131 05:22:01.870929 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:22:14 crc kubenswrapper[4832]: I0131 05:22:14.860492 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:22:14 crc kubenswrapper[4832]: E0131 05:22:14.861680 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:22:28 crc kubenswrapper[4832]: I0131 05:22:28.859700 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 
31 05:22:28 crc kubenswrapper[4832]: E0131 05:22:28.860850 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:22:42 crc kubenswrapper[4832]: I0131 05:22:42.861064 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:22:42 crc kubenswrapper[4832]: E0131 05:22:42.862341 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:22:56 crc kubenswrapper[4832]: I0131 05:22:56.860545 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:22:56 crc kubenswrapper[4832]: E0131 05:22:56.861786 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:23:09 crc kubenswrapper[4832]: I0131 05:23:09.859744 4832 scope.go:117] "RemoveContainer" 
containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:23:09 crc kubenswrapper[4832]: E0131 05:23:09.860579 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:23:20 crc kubenswrapper[4832]: I0131 05:23:20.860817 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:23:20 crc kubenswrapper[4832]: E0131 05:23:20.862135 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:23:32 crc kubenswrapper[4832]: I0131 05:23:32.860031 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:23:32 crc kubenswrapper[4832]: E0131 05:23:32.861697 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:23:42 crc kubenswrapper[4832]: I0131 05:23:42.253843 4832 generic.go:334] 
"Generic (PLEG): container finished" podID="3b3b6eae-8f54-4057-b9c8-74f27b762ada" containerID="c28a990b2cf306fc0f9cbcd078661981ffdfc823923a3a270486d4df4c6cbbc5" exitCode=0 Jan 31 05:23:42 crc kubenswrapper[4832]: I0131 05:23:42.253950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" event={"ID":"3b3b6eae-8f54-4057-b9c8-74f27b762ada","Type":"ContainerDied","Data":"c28a990b2cf306fc0f9cbcd078661981ffdfc823923a3a270486d4df4c6cbbc5"} Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.747035 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879195 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879301 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skh4t\" (UniqueName: \"kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879388 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879629 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879778 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" (UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.879835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0\") pod \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\" 
(UID: \"3b3b6eae-8f54-4057-b9c8-74f27b762ada\") " Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.885011 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.891690 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t" (OuterVolumeSpecName: "kube-api-access-skh4t") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "kube-api-access-skh4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.906136 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory" (OuterVolumeSpecName: "inventory") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.908163 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.908491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.911253 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.912531 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.913987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.916180 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3b3b6eae-8f54-4057-b9c8-74f27b762ada" (UID: "3b3b6eae-8f54-4057-b9c8-74f27b762ada"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982756 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982786 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982820 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skh4t\" (UniqueName: \"kubernetes.io/projected/3b3b6eae-8f54-4057-b9c8-74f27b762ada-kube-api-access-skh4t\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982830 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982839 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: 
I0131 05:23:43.982848 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982858 4832 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982868 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:43 crc kubenswrapper[4832]: I0131 05:23:43.982879 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3b3b6eae-8f54-4057-b9c8-74f27b762ada-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.277898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" event={"ID":"3b3b6eae-8f54-4057-b9c8-74f27b762ada","Type":"ContainerDied","Data":"b7117c478d2945f387ab7532bbf4f0c841992e5b5313da43d7af85ead6282586"} Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.278237 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7117c478d2945f387ab7532bbf4f0c841992e5b5313da43d7af85ead6282586" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.277946 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9rvr" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.391434 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6"] Jan 31 05:23:44 crc kubenswrapper[4832]: E0131 05:23:44.392127 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3b6eae-8f54-4057-b9c8-74f27b762ada" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.392233 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3b6eae-8f54-4057-b9c8-74f27b762ada" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.392592 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3b6eae-8f54-4057-b9c8-74f27b762ada" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.393465 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.399745 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.400488 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.400689 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.401532 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6"] Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.403089 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh5tt" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.403481 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491644 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwwb\" (UniqueName: \"kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491868 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.491901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.492007 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwwb\" (UniqueName: \"kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593492 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.593593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.597516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.597530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.597914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.598137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.598754 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: 
\"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.599040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.614148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwwb\" (UniqueName: \"kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:44 crc kubenswrapper[4832]: I0131 05:23:44.708956 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:23:45 crc kubenswrapper[4832]: I0131 05:23:45.271332 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6"] Jan 31 05:23:45 crc kubenswrapper[4832]: I0131 05:23:45.279540 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:23:45 crc kubenswrapper[4832]: I0131 05:23:45.287461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" event={"ID":"02aa5c8f-25f9-43a0-9d6e-dd67d7348443","Type":"ContainerStarted","Data":"097361c7cb26e6afe710642352c56ad634f4e7c3cf2177777317a01cccd18a18"} Jan 31 05:23:46 crc kubenswrapper[4832]: I0131 05:23:46.306812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" event={"ID":"02aa5c8f-25f9-43a0-9d6e-dd67d7348443","Type":"ContainerStarted","Data":"13960848efe4d4caeb398cd9bccc9a49f7b91a8e61e0e8411c1e491a875806fd"} Jan 31 05:23:46 crc kubenswrapper[4832]: I0131 05:23:46.331096 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" podStartSLOduration=1.942522911 podStartE2EDuration="2.331076112s" podCreationTimestamp="2026-01-31 05:23:44 +0000 UTC" firstStartedPulling="2026-01-31 05:23:45.279312307 +0000 UTC m=+2434.228133992" lastFinishedPulling="2026-01-31 05:23:45.667865498 +0000 UTC m=+2434.616687193" observedRunningTime="2026-01-31 05:23:46.328070578 +0000 UTC m=+2435.276892273" watchObservedRunningTime="2026-01-31 05:23:46.331076112 +0000 UTC m=+2435.279897807" Jan 31 05:23:47 crc kubenswrapper[4832]: I0131 05:23:47.860175 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:23:47 crc kubenswrapper[4832]: E0131 
05:23:47.863255 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:23:59 crc kubenswrapper[4832]: I0131 05:23:59.859729 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:23:59 crc kubenswrapper[4832]: E0131 05:23:59.860814 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:24:13 crc kubenswrapper[4832]: I0131 05:24:13.859388 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:24:13 crc kubenswrapper[4832]: E0131 05:24:13.860152 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:24:28 crc kubenswrapper[4832]: I0131 05:24:28.860604 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:24:28 crc 
kubenswrapper[4832]: E0131 05:24:28.861813 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:24:42 crc kubenswrapper[4832]: I0131 05:24:42.860283 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:24:42 crc kubenswrapper[4832]: E0131 05:24:42.861531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:24:53 crc kubenswrapper[4832]: I0131 05:24:53.859973 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:24:53 crc kubenswrapper[4832]: E0131 05:24:53.860713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:25:07 crc kubenswrapper[4832]: I0131 05:25:07.859058 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 
31 05:25:07 crc kubenswrapper[4832]: E0131 05:25:07.859674 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:25:20 crc kubenswrapper[4832]: I0131 05:25:20.859494 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:25:21 crc kubenswrapper[4832]: I0131 05:25:21.695555 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58"} Jan 31 05:26:08 crc kubenswrapper[4832]: I0131 05:26:08.237139 4832 generic.go:334] "Generic (PLEG): container finished" podID="02aa5c8f-25f9-43a0-9d6e-dd67d7348443" containerID="13960848efe4d4caeb398cd9bccc9a49f7b91a8e61e0e8411c1e491a875806fd" exitCode=0 Jan 31 05:26:08 crc kubenswrapper[4832]: I0131 05:26:08.237264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" event={"ID":"02aa5c8f-25f9-43a0-9d6e-dd67d7348443","Type":"ContainerDied","Data":"13960848efe4d4caeb398cd9bccc9a49f7b91a8e61e0e8411c1e491a875806fd"} Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.663368 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.765896 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.766081 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.766143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.766215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.766340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc 
kubenswrapper[4832]: I0131 05:26:09.766454 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nwwb\" (UniqueName: \"kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.766489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2\") pod \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\" (UID: \"02aa5c8f-25f9-43a0-9d6e-dd67d7348443\") " Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.774598 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.774708 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb" (OuterVolumeSpecName: "kube-api-access-5nwwb") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "kube-api-access-5nwwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.801002 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.803273 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.805641 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory" (OuterVolumeSpecName: "inventory") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.814403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.821910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "02aa5c8f-25f9-43a0-9d6e-dd67d7348443" (UID: "02aa5c8f-25f9-43a0-9d6e-dd67d7348443"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870652 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870696 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nwwb\" (UniqueName: \"kubernetes.io/projected/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-kube-api-access-5nwwb\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870710 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870728 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870745 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath 
\"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870761 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:09 crc kubenswrapper[4832]: I0131 05:26:09.870787 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02aa5c8f-25f9-43a0-9d6e-dd67d7348443-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 05:26:10 crc kubenswrapper[4832]: I0131 05:26:10.266681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" event={"ID":"02aa5c8f-25f9-43a0-9d6e-dd67d7348443","Type":"ContainerDied","Data":"097361c7cb26e6afe710642352c56ad634f4e7c3cf2177777317a01cccd18a18"} Jan 31 05:26:10 crc kubenswrapper[4832]: I0131 05:26:10.266729 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097361c7cb26e6afe710642352c56ad634f4e7c3cf2177777317a01cccd18a18" Jan 31 05:26:10 crc kubenswrapper[4832]: I0131 05:26:10.266815 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.638127 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:26:46 crc kubenswrapper[4832]: E0131 05:26:46.639424 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02aa5c8f-25f9-43a0-9d6e-dd67d7348443" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.639445 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="02aa5c8f-25f9-43a0-9d6e-dd67d7348443" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.639747 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="02aa5c8f-25f9-43a0-9d6e-dd67d7348443" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.641241 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.656867 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.792292 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.792482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrvv\" (UniqueName: \"kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.792602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.894382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.894891 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.895092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrvv\" (UniqueName: \"kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.895168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.895501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.928890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrvv\" (UniqueName: \"kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv\") pod \"community-operators-mcpwq\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:46 crc kubenswrapper[4832]: I0131 05:26:46.972465 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:47 crc kubenswrapper[4832]: I0131 05:26:47.590231 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:26:47 crc kubenswrapper[4832]: I0131 05:26:47.714450 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerStarted","Data":"219b35b5893b2ce1f6a5c337b05749eb49b2f1ca5bd53c7ddacba9bb4cefd166"} Jan 31 05:26:48 crc kubenswrapper[4832]: I0131 05:26:48.729650 4832 generic.go:334] "Generic (PLEG): container finished" podID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerID="fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8" exitCode=0 Jan 31 05:26:48 crc kubenswrapper[4832]: I0131 05:26:48.729717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerDied","Data":"fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8"} Jan 31 05:26:49 crc kubenswrapper[4832]: I0131 05:26:49.740134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerStarted","Data":"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898"} Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.760723 4832 generic.go:334] "Generic (PLEG): container finished" podID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerID="10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898" exitCode=0 Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.761178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" 
event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerDied","Data":"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898"} Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.789913 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.796031 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.801124 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.801755 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-schrn" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.802216 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.802893 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.803397 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.898578 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.898633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpr9k\" (UniqueName: 
\"kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.898664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.898725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.898911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.899205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.899282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.899400 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:50 crc kubenswrapper[4832]: I0131 05:26:50.899802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002265 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpr9k\" (UniqueName: \"kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002327 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002493 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002550 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " 
pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.002641 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.003340 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.003481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.003839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.003856 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc 
kubenswrapper[4832]: I0131 05:26:51.004373 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.009480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.009874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.017012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.022080 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpr9k\" (UniqueName: \"kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.031407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.120759 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.462197 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 31 05:26:51 crc kubenswrapper[4832]: W0131 05:26:51.466256 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf637281_101a_4e11_93b6_74f55d914798.slice/crio-2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2 WatchSource:0}: Error finding container 2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2: Status 404 returned error can't find the container with id 2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2 Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.773750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerStarted","Data":"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4"} Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.774835 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf637281-101a-4e11-93b6-74f55d914798","Type":"ContainerStarted","Data":"2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2"} Jan 31 05:26:51 crc kubenswrapper[4832]: I0131 05:26:51.816404 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mcpwq" podStartSLOduration=3.335920586 podStartE2EDuration="5.816379584s" podCreationTimestamp="2026-01-31 05:26:46 +0000 UTC" 
firstStartedPulling="2026-01-31 05:26:48.732723803 +0000 UTC m=+2617.681545488" lastFinishedPulling="2026-01-31 05:26:51.213182801 +0000 UTC m=+2620.162004486" observedRunningTime="2026-01-31 05:26:51.798170796 +0000 UTC m=+2620.746992481" watchObservedRunningTime="2026-01-31 05:26:51.816379584 +0000 UTC m=+2620.765201279" Jan 31 05:26:56 crc kubenswrapper[4832]: I0131 05:26:56.973327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:56 crc kubenswrapper[4832]: I0131 05:26:56.973998 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:57 crc kubenswrapper[4832]: I0131 05:26:57.020130 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:57 crc kubenswrapper[4832]: I0131 05:26:57.886853 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:26:57 crc kubenswrapper[4832]: I0131 05:26:57.958244 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:26:59 crc kubenswrapper[4832]: I0131 05:26:59.858217 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mcpwq" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="registry-server" containerID="cri-o://d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4" gracePeriod=2 Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.862073 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.874216 4832 generic.go:334] "Generic (PLEG): container finished" podID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerID="d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4" exitCode=0 Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.874269 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerDied","Data":"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4"} Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.874301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mcpwq" event={"ID":"1ad8206b-2dc9-4809-8edb-4afba36fd7f3","Type":"ContainerDied","Data":"219b35b5893b2ce1f6a5c337b05749eb49b2f1ca5bd53c7ddacba9bb4cefd166"} Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.874321 4832 scope.go:117] "RemoveContainer" containerID="d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.874492 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mcpwq" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.914670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrvv\" (UniqueName: \"kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv\") pod \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.915442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities\") pod \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.915596 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content\") pod \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\" (UID: \"1ad8206b-2dc9-4809-8edb-4afba36fd7f3\") " Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.916707 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities" (OuterVolumeSpecName: "utilities") pod "1ad8206b-2dc9-4809-8edb-4afba36fd7f3" (UID: "1ad8206b-2dc9-4809-8edb-4afba36fd7f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.926086 4832 scope.go:117] "RemoveContainer" containerID="10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.926768 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv" (OuterVolumeSpecName: "kube-api-access-tkrvv") pod "1ad8206b-2dc9-4809-8edb-4afba36fd7f3" (UID: "1ad8206b-2dc9-4809-8edb-4afba36fd7f3"). InnerVolumeSpecName "kube-api-access-tkrvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.987301 4832 scope.go:117] "RemoveContainer" containerID="fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8" Jan 31 05:27:00 crc kubenswrapper[4832]: I0131 05:27:00.988501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ad8206b-2dc9-4809-8edb-4afba36fd7f3" (UID: "1ad8206b-2dc9-4809-8edb-4afba36fd7f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.017920 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.018624 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrvv\" (UniqueName: \"kubernetes.io/projected/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-kube-api-access-tkrvv\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.018644 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad8206b-2dc9-4809-8edb-4afba36fd7f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.025382 4832 scope.go:117] "RemoveContainer" containerID="d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4" Jan 31 05:27:01 crc kubenswrapper[4832]: E0131 05:27:01.026652 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4\": container with ID starting with d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4 not found: ID does not exist" containerID="d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.026699 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4"} err="failed to get container status \"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4\": rpc error: code = NotFound desc = could not find container \"d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4\": container with ID 
starting with d8921ced88f6f3a984aacff23f41ca35f3a55e1f062daa9cef3d921607f42ad4 not found: ID does not exist" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.026736 4832 scope.go:117] "RemoveContainer" containerID="10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898" Jan 31 05:27:01 crc kubenswrapper[4832]: E0131 05:27:01.027316 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898\": container with ID starting with 10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898 not found: ID does not exist" containerID="10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.027344 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898"} err="failed to get container status \"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898\": rpc error: code = NotFound desc = could not find container \"10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898\": container with ID starting with 10610b8b4b1009339c3e634f6ff650c44e13b567a2fb9b61f8254815cf8dd898 not found: ID does not exist" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.027358 4832 scope.go:117] "RemoveContainer" containerID="fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8" Jan 31 05:27:01 crc kubenswrapper[4832]: E0131 05:27:01.027672 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8\": container with ID starting with fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8 not found: ID does not exist" containerID="fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8" Jan 31 
05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.027773 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8"} err="failed to get container status \"fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8\": rpc error: code = NotFound desc = could not find container \"fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8\": container with ID starting with fcc8054ff0f24d50d76c328a02ed2d1225f18e9ad65f740e4d98c22cef0035a8 not found: ID does not exist" Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.235892 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.241347 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mcpwq"] Jan 31 05:27:01 crc kubenswrapper[4832]: I0131 05:27:01.875244 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" path="/var/lib/kubelet/pods/1ad8206b-2dc9-4809-8edb-4afba36fd7f3/volumes" Jan 31 05:27:26 crc kubenswrapper[4832]: E0131 05:27:26.252250 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 31 05:27:26 crc kubenswrapper[4832]: E0131 05:27:26.252856 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(cf637281-101a-4e11-93b6-74f55d914798): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 05:27:26 crc kubenswrapper[4832]: E0131 05:27:26.254035 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="cf637281-101a-4e11-93b6-74f55d914798" Jan 31 05:27:27 crc kubenswrapper[4832]: E0131 05:27:27.195598 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="cf637281-101a-4e11-93b6-74f55d914798" Jan 31 05:27:45 crc 
kubenswrapper[4832]: I0131 05:27:45.004506 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 31 05:27:46 crc kubenswrapper[4832]: I0131 05:27:46.362440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf637281-101a-4e11-93b6-74f55d914798","Type":"ContainerStarted","Data":"4dbe5d754eb37dc90b45f261fd74b5577857c3c0040bfd8721da90a191637d5f"} Jan 31 05:27:46 crc kubenswrapper[4832]: I0131 05:27:46.384666 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.851683605 podStartE2EDuration="57.384639687s" podCreationTimestamp="2026-01-31 05:26:49 +0000 UTC" firstStartedPulling="2026-01-31 05:26:51.468758418 +0000 UTC m=+2620.417580103" lastFinishedPulling="2026-01-31 05:27:45.00171449 +0000 UTC m=+2673.950536185" observedRunningTime="2026-01-31 05:27:46.380955232 +0000 UTC m=+2675.329776947" watchObservedRunningTime="2026-01-31 05:27:46.384639687 +0000 UTC m=+2675.333461372" Jan 31 05:27:48 crc kubenswrapper[4832]: I0131 05:27:48.540152 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:27:48 crc kubenswrapper[4832]: I0131 05:27:48.541158 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:28:18 crc kubenswrapper[4832]: I0131 05:28:18.540457 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:28:18 crc kubenswrapper[4832]: I0131 05:28:18.541199 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:28:48 crc kubenswrapper[4832]: I0131 05:28:48.539974 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:28:48 crc kubenswrapper[4832]: I0131 05:28:48.540714 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:28:48 crc kubenswrapper[4832]: I0131 05:28:48.540789 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:28:48 crc kubenswrapper[4832]: I0131 05:28:48.542001 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:28:48 crc 
kubenswrapper[4832]: I0131 05:28:48.542135 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58" gracePeriod=600 Jan 31 05:28:49 crc kubenswrapper[4832]: I0131 05:28:49.211997 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58" exitCode=0 Jan 31 05:28:49 crc kubenswrapper[4832]: I0131 05:28:49.212102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58"} Jan 31 05:28:49 crc kubenswrapper[4832]: I0131 05:28:49.212725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5"} Jan 31 05:28:49 crc kubenswrapper[4832]: I0131 05:28:49.212783 4832 scope.go:117] "RemoveContainer" containerID="b0de83c4d688d29ec0ab0a5779158dd514b7f51b8485563664b7c37ad4ee1c70" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.484127 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:20 crc kubenswrapper[4832]: E0131 05:29:20.486632 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="extract-utilities" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.486751 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="extract-utilities" Jan 31 05:29:20 crc kubenswrapper[4832]: E0131 05:29:20.486867 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="registry-server" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.486951 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="registry-server" Jan 31 05:29:20 crc kubenswrapper[4832]: E0131 05:29:20.487045 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="extract-content" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.487123 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="extract-content" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.487441 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad8206b-2dc9-4809-8edb-4afba36fd7f3" containerName="registry-server" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.489148 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.517698 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.679551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7h6r\" (UniqueName: \"kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.679816 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.679844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.782037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.782082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.782119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7h6r\" (UniqueName: \"kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.782611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.782728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.802154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7h6r\" (UniqueName: \"kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r\") pod \"redhat-operators-7dnxx\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:20 crc kubenswrapper[4832]: I0131 05:29:20.819341 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:21 crc kubenswrapper[4832]: I0131 05:29:21.288113 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:21 crc kubenswrapper[4832]: I0131 05:29:21.563907 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerID="392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347" exitCode=0 Jan 31 05:29:21 crc kubenswrapper[4832]: I0131 05:29:21.564109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerDied","Data":"392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347"} Jan 31 05:29:21 crc kubenswrapper[4832]: I0131 05:29:21.564170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerStarted","Data":"f0fe2a9919eac5aacde76fbd62fa1497ce3e328614915cc91ab9b67f048d5cfb"} Jan 31 05:29:21 crc kubenswrapper[4832]: I0131 05:29:21.565905 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:29:22 crc kubenswrapper[4832]: I0131 05:29:22.580608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerStarted","Data":"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f"} Jan 31 05:29:24 crc kubenswrapper[4832]: I0131 05:29:24.616734 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerID="b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f" exitCode=0 Jan 31 05:29:24 crc kubenswrapper[4832]: I0131 05:29:24.617364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerDied","Data":"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f"} Jan 31 05:29:26 crc kubenswrapper[4832]: I0131 05:29:26.659172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerStarted","Data":"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c"} Jan 31 05:29:26 crc kubenswrapper[4832]: I0131 05:29:26.690169 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7dnxx" podStartSLOduration=2.910565955 podStartE2EDuration="6.690143139s" podCreationTimestamp="2026-01-31 05:29:20 +0000 UTC" firstStartedPulling="2026-01-31 05:29:21.565626993 +0000 UTC m=+2770.514448678" lastFinishedPulling="2026-01-31 05:29:25.345204177 +0000 UTC m=+2774.294025862" observedRunningTime="2026-01-31 05:29:26.683965196 +0000 UTC m=+2775.632786901" watchObservedRunningTime="2026-01-31 05:29:26.690143139 +0000 UTC m=+2775.638964834" Jan 31 05:29:30 crc kubenswrapper[4832]: I0131 05:29:30.819825 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:30 crc kubenswrapper[4832]: I0131 05:29:30.821354 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:31 crc kubenswrapper[4832]: I0131 05:29:31.905859 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7dnxx" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="registry-server" probeResult="failure" output=< Jan 31 05:29:31 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:29:31 crc kubenswrapper[4832]: > Jan 31 05:29:40 crc kubenswrapper[4832]: I0131 
05:29:40.866332 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:40 crc kubenswrapper[4832]: I0131 05:29:40.912955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:41 crc kubenswrapper[4832]: I0131 05:29:41.121093 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:42 crc kubenswrapper[4832]: I0131 05:29:42.849666 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7dnxx" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="registry-server" containerID="cri-o://d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c" gracePeriod=2 Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.323091 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.406195 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities\") pod \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.406374 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7h6r\" (UniqueName: \"kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r\") pod \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.406574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content\") pod \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\" (UID: \"c8f598a0-9279-4442-845a-cff7fa4fdeb5\") " Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.407123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities" (OuterVolumeSpecName: "utilities") pod "c8f598a0-9279-4442-845a-cff7fa4fdeb5" (UID: "c8f598a0-9279-4442-845a-cff7fa4fdeb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.415467 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r" (OuterVolumeSpecName: "kube-api-access-w7h6r") pod "c8f598a0-9279-4442-845a-cff7fa4fdeb5" (UID: "c8f598a0-9279-4442-845a-cff7fa4fdeb5"). InnerVolumeSpecName "kube-api-access-w7h6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.510934 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7h6r\" (UniqueName: \"kubernetes.io/projected/c8f598a0-9279-4442-845a-cff7fa4fdeb5-kube-api-access-w7h6r\") on node \"crc\" DevicePath \"\"" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.510961 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.542892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f598a0-9279-4442-845a-cff7fa4fdeb5" (UID: "c8f598a0-9279-4442-845a-cff7fa4fdeb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.612835 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f598a0-9279-4442-845a-cff7fa4fdeb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.863939 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerID="d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c" exitCode=0 Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.864054 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7dnxx" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.874056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerDied","Data":"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c"} Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.874108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7dnxx" event={"ID":"c8f598a0-9279-4442-845a-cff7fa4fdeb5","Type":"ContainerDied","Data":"f0fe2a9919eac5aacde76fbd62fa1497ce3e328614915cc91ab9b67f048d5cfb"} Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.874132 4832 scope.go:117] "RemoveContainer" containerID="d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.902882 4832 scope.go:117] "RemoveContainer" containerID="b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f" Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.917766 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 
05:29:43.930206 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7dnxx"] Jan 31 05:29:43 crc kubenswrapper[4832]: I0131 05:29:43.940635 4832 scope.go:117] "RemoveContainer" containerID="392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.006807 4832 scope.go:117] "RemoveContainer" containerID="d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c" Jan 31 05:29:44 crc kubenswrapper[4832]: E0131 05:29:44.007626 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c\": container with ID starting with d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c not found: ID does not exist" containerID="d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.007687 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c"} err="failed to get container status \"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c\": rpc error: code = NotFound desc = could not find container \"d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c\": container with ID starting with d1acf5dcc7a8f30354a58c5a8f4f1c0e9db6e7f56a228656ca64f5da22a6722c not found: ID does not exist" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.007721 4832 scope.go:117] "RemoveContainer" containerID="b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f" Jan 31 05:29:44 crc kubenswrapper[4832]: E0131 05:29:44.015981 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f\": container with ID 
starting with b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f not found: ID does not exist" containerID="b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.016023 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f"} err="failed to get container status \"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f\": rpc error: code = NotFound desc = could not find container \"b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f\": container with ID starting with b5c0198d260ec99e65bb7d27ca99f7ae1ecddd1f6cc3917e028be98dc509444f not found: ID does not exist" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.016053 4832 scope.go:117] "RemoveContainer" containerID="392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347" Jan 31 05:29:44 crc kubenswrapper[4832]: E0131 05:29:44.016484 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347\": container with ID starting with 392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347 not found: ID does not exist" containerID="392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347" Jan 31 05:29:44 crc kubenswrapper[4832]: I0131 05:29:44.016538 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347"} err="failed to get container status \"392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347\": rpc error: code = NotFound desc = could not find container \"392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347\": container with ID starting with 392b37ed2839f3b3e7442c3554d4dc3b541d4969328fe921a02ed6e0e3260347 not found: 
ID does not exist" Jan 31 05:29:45 crc kubenswrapper[4832]: I0131 05:29:45.878763 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" path="/var/lib/kubelet/pods/c8f598a0-9279-4442-845a-cff7fa4fdeb5/volumes" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.168817 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl"] Jan 31 05:30:00 crc kubenswrapper[4832]: E0131 05:30:00.170672 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="extract-content" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.170725 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="extract-content" Jan 31 05:30:00 crc kubenswrapper[4832]: E0131 05:30:00.170762 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="extract-utilities" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.170773 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="extract-utilities" Jan 31 05:30:00 crc kubenswrapper[4832]: E0131 05:30:00.170801 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="registry-server" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.170812 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="registry-server" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.171160 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f598a0-9279-4442-845a-cff7fa4fdeb5" containerName="registry-server" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.172189 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.176909 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.177028 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.195850 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl"] Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.343607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.343692 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ndd\" (UniqueName: \"kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.343771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.445885 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.445956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ndd\" (UniqueName: \"kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.446015 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.447465 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.460460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.474693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ndd\" (UniqueName: \"kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd\") pod \"collect-profiles-29497290-jt4zl\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:00 crc kubenswrapper[4832]: I0131 05:30:00.503682 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:01 crc kubenswrapper[4832]: I0131 05:30:01.032486 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl"] Jan 31 05:30:01 crc kubenswrapper[4832]: I0131 05:30:01.105184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" event={"ID":"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b","Type":"ContainerStarted","Data":"6818d97b6cd5de56de887e0bc33c8ceb9168a9b947110ee65720118d3cd0fc10"} Jan 31 05:30:02 crc kubenswrapper[4832]: I0131 05:30:02.118746 4832 generic.go:334] "Generic (PLEG): container finished" podID="e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" containerID="69bbd0e20158db39d493961bf5eda314c69533751e87f8fdac2ee4e31e3e8809" exitCode=0 Jan 31 05:30:02 crc kubenswrapper[4832]: I0131 05:30:02.118841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" 
event={"ID":"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b","Type":"ContainerDied","Data":"69bbd0e20158db39d493961bf5eda314c69533751e87f8fdac2ee4e31e3e8809"} Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.581291 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.725408 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8ndd\" (UniqueName: \"kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd\") pod \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.725550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume\") pod \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.725716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume\") pod \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\" (UID: \"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b\") " Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.726847 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" (UID: "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.733398 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd" (OuterVolumeSpecName: "kube-api-access-x8ndd") pod "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" (UID: "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b"). InnerVolumeSpecName "kube-api-access-x8ndd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.734209 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" (UID: "e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.829303 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.829384 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:03 crc kubenswrapper[4832]: I0131 05:30:03.829411 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8ndd\" (UniqueName: \"kubernetes.io/projected/e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b-kube-api-access-x8ndd\") on node \"crc\" DevicePath \"\"" Jan 31 05:30:04 crc kubenswrapper[4832]: I0131 05:30:04.144392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" 
event={"ID":"e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b","Type":"ContainerDied","Data":"6818d97b6cd5de56de887e0bc33c8ceb9168a9b947110ee65720118d3cd0fc10"} Jan 31 05:30:04 crc kubenswrapper[4832]: I0131 05:30:04.144449 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6818d97b6cd5de56de887e0bc33c8ceb9168a9b947110ee65720118d3cd0fc10" Jan 31 05:30:04 crc kubenswrapper[4832]: I0131 05:30:04.144507 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497290-jt4zl" Jan 31 05:30:04 crc kubenswrapper[4832]: I0131 05:30:04.660016 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44"] Jan 31 05:30:04 crc kubenswrapper[4832]: I0131 05:30:04.672060 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-nhr44"] Jan 31 05:30:05 crc kubenswrapper[4832]: I0131 05:30:05.873583 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c250428-30ed-4355-9fee-712f4471071c" path="/var/lib/kubelet/pods/3c250428-30ed-4355-9fee-712f4471071c/volumes" Jan 31 05:30:26 crc kubenswrapper[4832]: I0131 05:30:26.160996 4832 scope.go:117] "RemoveContainer" containerID="bbe82be41a5c757c99c249fb8da52bdd964f456dabf3879498a488e06ad1378e" Jan 31 05:30:48 crc kubenswrapper[4832]: I0131 05:30:48.540498 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:30:48 crc kubenswrapper[4832]: I0131 05:30:48.541188 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:18 crc kubenswrapper[4832]: I0131 05:31:18.540267 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:31:18 crc kubenswrapper[4832]: I0131 05:31:18.540734 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:48 crc kubenswrapper[4832]: I0131 05:31:48.540331 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:31:48 crc kubenswrapper[4832]: I0131 05:31:48.540915 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:31:48 crc kubenswrapper[4832]: I0131 05:31:48.540969 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:31:48 crc kubenswrapper[4832]: I0131 05:31:48.541622 4832 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:31:48 crc kubenswrapper[4832]: I0131 05:31:48.541697 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" gracePeriod=600 Jan 31 05:31:48 crc kubenswrapper[4832]: E0131 05:31:48.671779 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:31:49 crc kubenswrapper[4832]: I0131 05:31:49.233834 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" exitCode=0 Jan 31 05:31:49 crc kubenswrapper[4832]: I0131 05:31:49.233888 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5"} Jan 31 05:31:49 crc kubenswrapper[4832]: I0131 05:31:49.234265 4832 scope.go:117] "RemoveContainer" containerID="c772ee0e09a64b1f8496d5e09ab6eed6cac2fb7a8d75e579d85cfcfbb8a01a58" Jan 31 05:31:49 crc 
kubenswrapper[4832]: I0131 05:31:49.235313 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:31:49 crc kubenswrapper[4832]: E0131 05:31:49.235920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:32:04 crc kubenswrapper[4832]: I0131 05:32:04.860746 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:32:04 crc kubenswrapper[4832]: E0131 05:32:04.861804 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:32:18 crc kubenswrapper[4832]: I0131 05:32:18.860008 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:32:18 crc kubenswrapper[4832]: E0131 05:32:18.861471 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 
31 05:32:29 crc kubenswrapper[4832]: I0131 05:32:29.860597 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:32:29 crc kubenswrapper[4832]: E0131 05:32:29.863258 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:32:43 crc kubenswrapper[4832]: I0131 05:32:43.860122 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:32:43 crc kubenswrapper[4832]: E0131 05:32:43.861826 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:32:58 crc kubenswrapper[4832]: I0131 05:32:58.859396 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:32:58 crc kubenswrapper[4832]: E0131 05:32:58.860449 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" 
podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:33:09 crc kubenswrapper[4832]: I0131 05:33:09.861193 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:33:09 crc kubenswrapper[4832]: E0131 05:33:09.862357 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:33:20 crc kubenswrapper[4832]: I0131 05:33:20.859623 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:33:20 crc kubenswrapper[4832]: E0131 05:33:20.860494 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:33:35 crc kubenswrapper[4832]: I0131 05:33:35.860514 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:33:35 crc kubenswrapper[4832]: E0131 05:33:35.861397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:33:51 crc kubenswrapper[4832]: I0131 05:33:51.871418 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:33:51 crc kubenswrapper[4832]: E0131 05:33:51.872949 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:34:02 crc kubenswrapper[4832]: I0131 05:34:02.860878 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:34:02 crc kubenswrapper[4832]: E0131 05:34:02.862340 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:34:14 crc kubenswrapper[4832]: I0131 05:34:14.860446 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:34:14 crc kubenswrapper[4832]: E0131 05:34:14.861677 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:34:28 crc kubenswrapper[4832]: I0131 05:34:28.860629 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:34:28 crc kubenswrapper[4832]: E0131 05:34:28.861815 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:34:40 crc kubenswrapper[4832]: I0131 05:34:40.860139 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:34:40 crc kubenswrapper[4832]: E0131 05:34:40.860859 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:34:53 crc kubenswrapper[4832]: I0131 05:34:53.860047 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:34:53 crc kubenswrapper[4832]: E0131 05:34:53.861201 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:35:08 crc kubenswrapper[4832]: I0131 05:35:08.859618 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:35:08 crc kubenswrapper[4832]: E0131 05:35:08.860383 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:35:20 crc kubenswrapper[4832]: I0131 05:35:20.859288 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:35:20 crc kubenswrapper[4832]: E0131 05:35:20.860134 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.868432 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:35:31 crc kubenswrapper[4832]: E0131 05:35:31.869602 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.884202 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:31 crc kubenswrapper[4832]: E0131 05:35:31.888442 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" containerName="collect-profiles" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.888489 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" containerName="collect-profiles" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.888950 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16b39bb-39af-4c8f-96ef-f8a2ecb5ca0b" containerName="collect-profiles" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.892669 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.900470 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.993812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.994101 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx58v\" (UniqueName: \"kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:31 crc kubenswrapper[4832]: I0131 05:35:31.994196 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.074036 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.076478 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.093192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.095614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx58v\" (UniqueName: \"kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.095723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.095776 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.096213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.096669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.116757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx58v\" (UniqueName: \"kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v\") pod \"certified-operators-rjl6n\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.196909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.196995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.197081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f94wm\" (UniqueName: \"kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.235124 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.299097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.299157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.299235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f94wm\" (UniqueName: \"kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.300327 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.300512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " 
pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.323508 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f94wm\" (UniqueName: \"kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm\") pod \"redhat-marketplace-27r44\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.396997 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:32 crc kubenswrapper[4832]: I0131 05:35:32.839997 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:33 crc kubenswrapper[4832]: W0131 05:35:33.007248 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ecdda3_994f_4f40_928c_b7ce7f95896e.slice/crio-30ed117a792265c670fbc01a8561e8865feea78db1c6202d04625f851a4ae3ac WatchSource:0}: Error finding container 30ed117a792265c670fbc01a8561e8865feea78db1c6202d04625f851a4ae3ac: Status 404 returned error can't find the container with id 30ed117a792265c670fbc01a8561e8865feea78db1c6202d04625f851a4ae3ac Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.016809 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.809453 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c917947-392e-42c8-88cf-5918d9450905" containerID="47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7" exitCode=0 Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.809592 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" 
event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerDied","Data":"47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7"} Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.809903 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerStarted","Data":"966bdb5131e7cb2a76028cf3e29b139053b64478059cb7a4367d7351d7449dd9"} Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.815217 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.815467 4832 generic.go:334] "Generic (PLEG): container finished" podID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerID="45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014" exitCode=0 Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.815649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerDied","Data":"45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014"} Jan 31 05:35:33 crc kubenswrapper[4832]: I0131 05:35:33.815690 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerStarted","Data":"30ed117a792265c670fbc01a8561e8865feea78db1c6202d04625f851a4ae3ac"} Jan 31 05:35:34 crc kubenswrapper[4832]: I0131 05:35:34.829607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerStarted","Data":"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e"} Jan 31 05:35:34 crc kubenswrapper[4832]: I0131 05:35:34.832102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerStarted","Data":"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee"} Jan 31 05:35:35 crc kubenswrapper[4832]: I0131 05:35:35.841998 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c917947-392e-42c8-88cf-5918d9450905" containerID="00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e" exitCode=0 Jan 31 05:35:35 crc kubenswrapper[4832]: I0131 05:35:35.842204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerDied","Data":"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e"} Jan 31 05:35:35 crc kubenswrapper[4832]: I0131 05:35:35.846924 4832 generic.go:334] "Generic (PLEG): container finished" podID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerID="52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee" exitCode=0 Jan 31 05:35:35 crc kubenswrapper[4832]: I0131 05:35:35.846956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerDied","Data":"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee"} Jan 31 05:35:36 crc kubenswrapper[4832]: I0131 05:35:36.857442 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerStarted","Data":"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494"} Jan 31 05:35:36 crc kubenswrapper[4832]: I0131 05:35:36.859754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" 
event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerStarted","Data":"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128"} Jan 31 05:35:36 crc kubenswrapper[4832]: I0131 05:35:36.876696 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rjl6n" podStartSLOduration=3.418369861 podStartE2EDuration="5.87668012s" podCreationTimestamp="2026-01-31 05:35:31 +0000 UTC" firstStartedPulling="2026-01-31 05:35:33.81391348 +0000 UTC m=+3142.762735165" lastFinishedPulling="2026-01-31 05:35:36.272223719 +0000 UTC m=+3145.221045424" observedRunningTime="2026-01-31 05:35:36.873743949 +0000 UTC m=+3145.822565634" watchObservedRunningTime="2026-01-31 05:35:36.87668012 +0000 UTC m=+3145.825501805" Jan 31 05:35:36 crc kubenswrapper[4832]: I0131 05:35:36.892517 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27r44" podStartSLOduration=2.266305732 podStartE2EDuration="4.892493234s" podCreationTimestamp="2026-01-31 05:35:32 +0000 UTC" firstStartedPulling="2026-01-31 05:35:33.818578536 +0000 UTC m=+3142.767400221" lastFinishedPulling="2026-01-31 05:35:36.444766028 +0000 UTC m=+3145.393587723" observedRunningTime="2026-01-31 05:35:36.887871759 +0000 UTC m=+3145.836693454" watchObservedRunningTime="2026-01-31 05:35:36.892493234 +0000 UTC m=+3145.841314919" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.236059 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.236535 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.295853 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 
05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.398511 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.398861 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.450499 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.956920 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:42 crc kubenswrapper[4832]: I0131 05:35:42.959937 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:44 crc kubenswrapper[4832]: I0131 05:35:44.264078 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:44 crc kubenswrapper[4832]: I0131 05:35:44.859135 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:35:44 crc kubenswrapper[4832]: E0131 05:35:44.859457 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:35:44 crc kubenswrapper[4832]: I0131 05:35:44.925934 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rjl6n" 
podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="registry-server" containerID="cri-o://cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494" gracePeriod=2 Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.262146 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.262678 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27r44" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="registry-server" containerID="cri-o://afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128" gracePeriod=2 Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.442115 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.475902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities\") pod \"9c917947-392e-42c8-88cf-5918d9450905\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.476297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx58v\" (UniqueName: \"kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v\") pod \"9c917947-392e-42c8-88cf-5918d9450905\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.476401 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content\") pod \"9c917947-392e-42c8-88cf-5918d9450905\" (UID: \"9c917947-392e-42c8-88cf-5918d9450905\") " Jan 31 
05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.476777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities" (OuterVolumeSpecName: "utilities") pod "9c917947-392e-42c8-88cf-5918d9450905" (UID: "9c917947-392e-42c8-88cf-5918d9450905"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.476881 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.482374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v" (OuterVolumeSpecName: "kube-api-access-mx58v") pod "9c917947-392e-42c8-88cf-5918d9450905" (UID: "9c917947-392e-42c8-88cf-5918d9450905"). InnerVolumeSpecName "kube-api-access-mx58v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.579305 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx58v\" (UniqueName: \"kubernetes.io/projected/9c917947-392e-42c8-88cf-5918d9450905-kube-api-access-mx58v\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.743530 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.782196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities\") pod \"95ecdda3-994f-4f40-928c-b7ce7f95896e\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.782314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f94wm\" (UniqueName: \"kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm\") pod \"95ecdda3-994f-4f40-928c-b7ce7f95896e\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.782345 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content\") pod \"95ecdda3-994f-4f40-928c-b7ce7f95896e\" (UID: \"95ecdda3-994f-4f40-928c-b7ce7f95896e\") " Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.785936 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm" (OuterVolumeSpecName: "kube-api-access-f94wm") pod "95ecdda3-994f-4f40-928c-b7ce7f95896e" (UID: "95ecdda3-994f-4f40-928c-b7ce7f95896e"). InnerVolumeSpecName "kube-api-access-f94wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.786295 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities" (OuterVolumeSpecName: "utilities") pod "95ecdda3-994f-4f40-928c-b7ce7f95896e" (UID: "95ecdda3-994f-4f40-928c-b7ce7f95896e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.884236 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f94wm\" (UniqueName: \"kubernetes.io/projected/95ecdda3-994f-4f40-928c-b7ce7f95896e-kube-api-access-f94wm\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.884296 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.938940 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c917947-392e-42c8-88cf-5918d9450905" containerID="cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494" exitCode=0 Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.938992 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerDied","Data":"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494"} Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.939051 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjl6n" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.939084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjl6n" event={"ID":"9c917947-392e-42c8-88cf-5918d9450905","Type":"ContainerDied","Data":"966bdb5131e7cb2a76028cf3e29b139053b64478059cb7a4367d7351d7449dd9"} Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.939125 4832 scope.go:117] "RemoveContainer" containerID="cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.943649 4832 generic.go:334] "Generic (PLEG): container finished" podID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerID="afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128" exitCode=0 Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.943689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerDied","Data":"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128"} Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.943724 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27r44" event={"ID":"95ecdda3-994f-4f40-928c-b7ce7f95896e","Type":"ContainerDied","Data":"30ed117a792265c670fbc01a8561e8865feea78db1c6202d04625f851a4ae3ac"} Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.943829 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27r44" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.967296 4832 scope.go:117] "RemoveContainer" containerID="00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e" Jan 31 05:35:45 crc kubenswrapper[4832]: I0131 05:35:45.993361 4832 scope.go:117] "RemoveContainer" containerID="47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.016043 4832 scope.go:117] "RemoveContainer" containerID="cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.016689 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494\": container with ID starting with cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494 not found: ID does not exist" containerID="cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.016739 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494"} err="failed to get container status \"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494\": rpc error: code = NotFound desc = could not find container \"cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494\": container with ID starting with cdbba503ca120786bab8a435399de9419866e4a077f1a26bbc2204b7b957d494 not found: ID does not exist" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.016773 4832 scope.go:117] "RemoveContainer" containerID="00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.017257 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e\": container with ID starting with 00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e not found: ID does not exist" containerID="00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.017299 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e"} err="failed to get container status \"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e\": rpc error: code = NotFound desc = could not find container \"00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e\": container with ID starting with 00c6e7d7738c47862b17a24a6beed9f3c5a277229e75d873f8a4c38d0083eb9e not found: ID does not exist" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.017325 4832 scope.go:117] "RemoveContainer" containerID="47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.017624 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7\": container with ID starting with 47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7 not found: ID does not exist" containerID="47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.017684 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7"} err="failed to get container status \"47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7\": rpc error: code = NotFound desc = could not find container 
\"47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7\": container with ID starting with 47fdcf2d5db3a44970625b122493d7a392c687876e5662d13535fa0a7ef2f0f7 not found: ID does not exist" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.017730 4832 scope.go:117] "RemoveContainer" containerID="afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.041368 4832 scope.go:117] "RemoveContainer" containerID="52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.102718 4832 scope.go:117] "RemoveContainer" containerID="45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.142826 4832 scope.go:117] "RemoveContainer" containerID="afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.143308 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128\": container with ID starting with afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128 not found: ID does not exist" containerID="afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.143358 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128"} err="failed to get container status \"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128\": rpc error: code = NotFound desc = could not find container \"afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128\": container with ID starting with afb2528d41d281fcd647e5b1f1095f7681c5f02a943f2e75c15b55239b976128 not found: ID does not exist" Jan 31 05:35:46 crc 
kubenswrapper[4832]: I0131 05:35:46.143390 4832 scope.go:117] "RemoveContainer" containerID="52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.143647 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee\": container with ID starting with 52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee not found: ID does not exist" containerID="52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.143673 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee"} err="failed to get container status \"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee\": rpc error: code = NotFound desc = could not find container \"52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee\": container with ID starting with 52f97c91b6dafce595204615105bab811d1bc9e2751ce169605be4cd25eeb5ee not found: ID does not exist" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.143689 4832 scope.go:117] "RemoveContainer" containerID="45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014" Jan 31 05:35:46 crc kubenswrapper[4832]: E0131 05:35:46.144326 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014\": container with ID starting with 45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014 not found: ID does not exist" containerID="45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014" Jan 31 05:35:46 crc kubenswrapper[4832]: I0131 05:35:46.144357 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014"} err="failed to get container status \"45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014\": rpc error: code = NotFound desc = could not find container \"45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014\": container with ID starting with 45f41b4162e54e9f6234d5dda292b61b803078754f611435bd7a07cb8b4ad014 not found: ID does not exist" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.133900 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95ecdda3-994f-4f40-928c-b7ce7f95896e" (UID: "95ecdda3-994f-4f40-928c-b7ce7f95896e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.167978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c917947-392e-42c8-88cf-5918d9450905" (UID: "9c917947-392e-42c8-88cf-5918d9450905"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.197367 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.210490 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c917947-392e-42c8-88cf-5918d9450905-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.210518 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ecdda3-994f-4f40-928c-b7ce7f95896e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.220649 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27r44"] Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.473223 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.482166 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rjl6n"] Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.870761 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" path="/var/lib/kubelet/pods/95ecdda3-994f-4f40-928c-b7ce7f95896e/volumes" Jan 31 05:35:47 crc kubenswrapper[4832]: I0131 05:35:47.871388 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c917947-392e-42c8-88cf-5918d9450905" path="/var/lib/kubelet/pods/9c917947-392e-42c8-88cf-5918d9450905/volumes" Jan 31 05:35:59 crc kubenswrapper[4832]: I0131 05:35:59.860806 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:35:59 crc 
kubenswrapper[4832]: E0131 05:35:59.861550 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:36:13 crc kubenswrapper[4832]: I0131 05:36:13.859620 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:36:13 crc kubenswrapper[4832]: E0131 05:36:13.860713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:36:25 crc kubenswrapper[4832]: I0131 05:36:25.859334 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:36:25 crc kubenswrapper[4832]: E0131 05:36:25.860329 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:36:37 crc kubenswrapper[4832]: I0131 05:36:37.863638 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 
31 05:36:37 crc kubenswrapper[4832]: E0131 05:36:37.864463 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:36:49 crc kubenswrapper[4832]: I0131 05:36:49.859698 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:36:50 crc kubenswrapper[4832]: I0131 05:36:50.564755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6"} Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.911412 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:03 crc kubenswrapper[4832]: E0131 05:38:03.912396 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="extract-content" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912416 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="extract-content" Jan 31 05:38:03 crc kubenswrapper[4832]: E0131 05:38:03.912457 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="extract-utilities" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912466 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="extract-utilities" Jan 31 05:38:03 crc 
kubenswrapper[4832]: E0131 05:38:03.912479 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="registry-server" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912488 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="registry-server" Jan 31 05:38:03 crc kubenswrapper[4832]: E0131 05:38:03.912507 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="extract-utilities" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912518 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="extract-utilities" Jan 31 05:38:03 crc kubenswrapper[4832]: E0131 05:38:03.912535 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="extract-content" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912545 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="extract-content" Jan 31 05:38:03 crc kubenswrapper[4832]: E0131 05:38:03.912604 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="registry-server" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912616 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="registry-server" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912914 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ecdda3-994f-4f40-928c-b7ce7f95896e" containerName="registry-server" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.912944 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c917947-392e-42c8-88cf-5918d9450905" containerName="registry-server" Jan 31 05:38:03 crc 
kubenswrapper[4832]: I0131 05:38:03.914871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.918933 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.949108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.949164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:03 crc kubenswrapper[4832]: I0131 05:38:03.949545 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsspn\" (UniqueName: \"kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.051184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 
05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.051232 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.051316 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsspn\" (UniqueName: \"kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.051891 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.051904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.071993 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsspn\" (UniqueName: \"kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn\") pod \"community-operators-9wz82\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: 
I0131 05:38:04.236061 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:04 crc kubenswrapper[4832]: W0131 05:38:04.749652 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd33155cf_f884_4cc1_96d6_e85809f9af7b.slice/crio-7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4 WatchSource:0}: Error finding container 7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4: Status 404 returned error can't find the container with id 7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4 Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.755660 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:04 crc kubenswrapper[4832]: I0131 05:38:04.766035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerStarted","Data":"7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4"} Jan 31 05:38:05 crc kubenswrapper[4832]: I0131 05:38:05.778119 4832 generic.go:334] "Generic (PLEG): container finished" podID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerID="6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183" exitCode=0 Jan 31 05:38:05 crc kubenswrapper[4832]: I0131 05:38:05.778169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerDied","Data":"6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183"} Jan 31 05:38:07 crc kubenswrapper[4832]: I0131 05:38:07.794538 4832 generic.go:334] "Generic (PLEG): container finished" podID="d33155cf-f884-4cc1-96d6-e85809f9af7b" 
containerID="f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a" exitCode=0 Jan 31 05:38:07 crc kubenswrapper[4832]: I0131 05:38:07.794585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerDied","Data":"f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a"} Jan 31 05:38:08 crc kubenswrapper[4832]: I0131 05:38:08.804848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerStarted","Data":"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c"} Jan 31 05:38:08 crc kubenswrapper[4832]: I0131 05:38:08.838554 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9wz82" podStartSLOduration=3.414235124 podStartE2EDuration="5.838522691s" podCreationTimestamp="2026-01-31 05:38:03 +0000 UTC" firstStartedPulling="2026-01-31 05:38:05.780922823 +0000 UTC m=+3294.729744518" lastFinishedPulling="2026-01-31 05:38:08.2052104 +0000 UTC m=+3297.154032085" observedRunningTime="2026-01-31 05:38:08.829471279 +0000 UTC m=+3297.778292964" watchObservedRunningTime="2026-01-31 05:38:08.838522691 +0000 UTC m=+3297.787344416" Jan 31 05:38:14 crc kubenswrapper[4832]: I0131 05:38:14.236497 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:14 crc kubenswrapper[4832]: I0131 05:38:14.237102 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:14 crc kubenswrapper[4832]: I0131 05:38:14.283515 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:14 crc kubenswrapper[4832]: I0131 
05:38:14.922243 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:14 crc kubenswrapper[4832]: I0131 05:38:14.987727 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:16 crc kubenswrapper[4832]: I0131 05:38:16.880029 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9wz82" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="registry-server" containerID="cri-o://1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c" gracePeriod=2 Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.419525 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.447050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content\") pod \"d33155cf-f884-4cc1-96d6-e85809f9af7b\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.447130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities\") pod \"d33155cf-f884-4cc1-96d6-e85809f9af7b\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.447523 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsspn\" (UniqueName: \"kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn\") pod \"d33155cf-f884-4cc1-96d6-e85809f9af7b\" (UID: \"d33155cf-f884-4cc1-96d6-e85809f9af7b\") " Jan 31 05:38:17 crc kubenswrapper[4832]: 
I0131 05:38:17.448050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities" (OuterVolumeSpecName: "utilities") pod "d33155cf-f884-4cc1-96d6-e85809f9af7b" (UID: "d33155cf-f884-4cc1-96d6-e85809f9af7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.448358 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.462928 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn" (OuterVolumeSpecName: "kube-api-access-jsspn") pod "d33155cf-f884-4cc1-96d6-e85809f9af7b" (UID: "d33155cf-f884-4cc1-96d6-e85809f9af7b"). InnerVolumeSpecName "kube-api-access-jsspn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.525354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d33155cf-f884-4cc1-96d6-e85809f9af7b" (UID: "d33155cf-f884-4cc1-96d6-e85809f9af7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.550881 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33155cf-f884-4cc1-96d6-e85809f9af7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.550934 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsspn\" (UniqueName: \"kubernetes.io/projected/d33155cf-f884-4cc1-96d6-e85809f9af7b-kube-api-access-jsspn\") on node \"crc\" DevicePath \"\"" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.891360 4832 generic.go:334] "Generic (PLEG): container finished" podID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerID="1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c" exitCode=0 Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.891406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerDied","Data":"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c"} Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.891442 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wz82" event={"ID":"d33155cf-f884-4cc1-96d6-e85809f9af7b","Type":"ContainerDied","Data":"7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4"} Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.891464 4832 scope.go:117] "RemoveContainer" containerID="1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.891469 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wz82" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.924944 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.936121 4832 scope.go:117] "RemoveContainer" containerID="f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a" Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.938027 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9wz82"] Jan 31 05:38:17 crc kubenswrapper[4832]: I0131 05:38:17.961690 4832 scope.go:117] "RemoveContainer" containerID="6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.033143 4832 scope.go:117] "RemoveContainer" containerID="1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c" Jan 31 05:38:18 crc kubenswrapper[4832]: E0131 05:38:18.033601 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c\": container with ID starting with 1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c not found: ID does not exist" containerID="1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.033634 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c"} err="failed to get container status \"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c\": rpc error: code = NotFound desc = could not find container \"1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c\": container with ID starting with 1b393be3cc19d0d4d893ed9f1ca336f6fe4dc020e1cb498332c1186a94287a2c not 
found: ID does not exist" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.033655 4832 scope.go:117] "RemoveContainer" containerID="f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a" Jan 31 05:38:18 crc kubenswrapper[4832]: E0131 05:38:18.035550 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a\": container with ID starting with f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a not found: ID does not exist" containerID="f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.035623 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a"} err="failed to get container status \"f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a\": rpc error: code = NotFound desc = could not find container \"f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a\": container with ID starting with f81e797990a7f78edafee8958600d6bf3990165f66223bdeabb1df5ec43d2f0a not found: ID does not exist" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.035655 4832 scope.go:117] "RemoveContainer" containerID="6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183" Jan 31 05:38:18 crc kubenswrapper[4832]: E0131 05:38:18.036007 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183\": container with ID starting with 6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183 not found: ID does not exist" containerID="6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183" Jan 31 05:38:18 crc kubenswrapper[4832]: I0131 05:38:18.036038 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183"} err="failed to get container status \"6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183\": rpc error: code = NotFound desc = could not find container \"6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183\": container with ID starting with 6848ff21005d39ae800d2fbae12a1b8de82863657d11c08ed945faaab9421183 not found: ID does not exist" Jan 31 05:38:18 crc kubenswrapper[4832]: E0131 05:38:18.090493 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd33155cf_f884_4cc1_96d6_e85809f9af7b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd33155cf_f884_4cc1_96d6_e85809f9af7b.slice/crio-7a1838f29a14a1168704ada4b9f030be5a5c0cdb9577756f3af65eaf24eebbe4\": RecentStats: unable to find data in memory cache]" Jan 31 05:38:19 crc kubenswrapper[4832]: I0131 05:38:19.878221 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" path="/var/lib/kubelet/pods/d33155cf-f884-4cc1-96d6-e85809f9af7b/volumes" Jan 31 05:39:08 crc kubenswrapper[4832]: I0131 05:39:08.397465 4832 generic.go:334] "Generic (PLEG): container finished" podID="cf637281-101a-4e11-93b6-74f55d914798" containerID="4dbe5d754eb37dc90b45f261fd74b5577857c3c0040bfd8721da90a191637d5f" exitCode=0 Jan 31 05:39:08 crc kubenswrapper[4832]: I0131 05:39:08.397570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf637281-101a-4e11-93b6-74f55d914798","Type":"ContainerDied","Data":"4dbe5d754eb37dc90b45f261fd74b5577857c3c0040bfd8721da90a191637d5f"} Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.807179 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910086 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910132 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910156 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910292 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910505 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpr9k\" (UniqueName: \"kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910546 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910607 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.910642 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cf637281-101a-4e11-93b6-74f55d914798\" (UID: \"cf637281-101a-4e11-93b6-74f55d914798\") " Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.911574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.912520 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data" (OuterVolumeSpecName: "config-data") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.916047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.916048 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k" (OuterVolumeSpecName: "kube-api-access-mpr9k") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "kube-api-access-mpr9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.916735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.939872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.940713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.941419 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:39:09 crc kubenswrapper[4832]: I0131 05:39:09.958727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf637281-101a-4e11-93b6-74f55d914798" (UID: "cf637281-101a-4e11-93b6-74f55d914798"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013120 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013161 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/cf637281-101a-4e11-93b6-74f55d914798-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013172 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpr9k\" (UniqueName: \"kubernetes.io/projected/cf637281-101a-4e11-93b6-74f55d914798-kube-api-access-mpr9k\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013182 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013191 4832 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013211 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013221 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-openstack-config-secret\") on node \"crc\" 
DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013229 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf637281-101a-4e11-93b6-74f55d914798-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.013237 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf637281-101a-4e11-93b6-74f55d914798-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.034389 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.115674 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.422489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"cf637281-101a-4e11-93b6-74f55d914798","Type":"ContainerDied","Data":"2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2"} Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.422526 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d380936f2560191bb960c51e9e64b82105a5097f325e262f5a47620a22e0ee2" Jan 31 05:39:10 crc kubenswrapper[4832]: I0131 05:39:10.422593 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.507868 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:39:13 crc kubenswrapper[4832]: E0131 05:39:13.508958 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="extract-content" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.508980 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="extract-content" Jan 31 05:39:13 crc kubenswrapper[4832]: E0131 05:39:13.509001 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf637281-101a-4e11-93b6-74f55d914798" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.509009 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf637281-101a-4e11-93b6-74f55d914798" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:39:13 crc kubenswrapper[4832]: E0131 05:39:13.509023 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="registry-server" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.509030 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="registry-server" Jan 31 05:39:13 crc kubenswrapper[4832]: E0131 05:39:13.509067 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="extract-utilities" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.509075 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="extract-utilities" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.509322 4832 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="cf637281-101a-4e11-93b6-74f55d914798" containerName="tempest-tests-tempest-tests-runner" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.509335 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33155cf-f884-4cc1-96d6-e85809f9af7b" containerName="registry-server" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.510042 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.513210 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-schrn" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.516303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.579918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.580126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnr7\" (UniqueName: \"kubernetes.io/projected/9b96ff1a-9380-4bdc-a490-bdfa6e760792-kube-api-access-kpnr7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.682271 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.682343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnr7\" (UniqueName: \"kubernetes.io/projected/9b96ff1a-9380-4bdc-a490-bdfa6e760792-kube-api-access-kpnr7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.682801 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.704475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpnr7\" (UniqueName: \"kubernetes.io/projected/9b96ff1a-9380-4bdc-a490-bdfa6e760792-kube-api-access-kpnr7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 crc kubenswrapper[4832]: I0131 05:39:13.705815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9b96ff1a-9380-4bdc-a490-bdfa6e760792\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:13 
crc kubenswrapper[4832]: I0131 05:39:13.834448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 05:39:14 crc kubenswrapper[4832]: I0131 05:39:14.270609 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 05:39:14 crc kubenswrapper[4832]: I0131 05:39:14.460819 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9b96ff1a-9380-4bdc-a490-bdfa6e760792","Type":"ContainerStarted","Data":"273288c325164e212ded9f88bb8eab67c3a60b35425b04cc6e0c0591ad4ba3f4"} Jan 31 05:39:15 crc kubenswrapper[4832]: I0131 05:39:15.475104 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9b96ff1a-9380-4bdc-a490-bdfa6e760792","Type":"ContainerStarted","Data":"7152f76aef7b1e5504b50d84157eb437d1a80bf248dcc3b59d822c343c00047c"} Jan 31 05:39:15 crc kubenswrapper[4832]: I0131 05:39:15.506341 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.663801205 podStartE2EDuration="2.506310353s" podCreationTimestamp="2026-01-31 05:39:13 +0000 UTC" firstStartedPulling="2026-01-31 05:39:14.279302547 +0000 UTC m=+3363.228124232" lastFinishedPulling="2026-01-31 05:39:15.121811695 +0000 UTC m=+3364.070633380" observedRunningTime="2026-01-31 05:39:15.49751426 +0000 UTC m=+3364.446335985" watchObservedRunningTime="2026-01-31 05:39:15.506310353 +0000 UTC m=+3364.455132058" Jan 31 05:39:18 crc kubenswrapper[4832]: I0131 05:39:18.539708 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:39:18 crc kubenswrapper[4832]: I0131 05:39:18.540096 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.765096 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.768000 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.774915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.790977 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.791046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqh7\" (UniqueName: \"kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.791154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.892766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.892839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvqh7\" (UniqueName: \"kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.892924 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.893495 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.893592 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:28 crc kubenswrapper[4832]: I0131 05:39:28.918055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvqh7\" (UniqueName: \"kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7\") pod \"redhat-operators-kqmts\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:29 crc kubenswrapper[4832]: I0131 05:39:29.092273 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:29 crc kubenswrapper[4832]: W0131 05:39:29.575415 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137a77e6_6ae4_4ac1_8a0d_c2367e97e8cb.slice/crio-8ea15a1de1d19d0d3051de77bd6c0d210f37cb3629dcca4f94fd36f9ea0f06bb WatchSource:0}: Error finding container 8ea15a1de1d19d0d3051de77bd6c0d210f37cb3629dcca4f94fd36f9ea0f06bb: Status 404 returned error can't find the container with id 8ea15a1de1d19d0d3051de77bd6c0d210f37cb3629dcca4f94fd36f9ea0f06bb Jan 31 05:39:29 crc kubenswrapper[4832]: I0131 05:39:29.590556 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerStarted","Data":"8ea15a1de1d19d0d3051de77bd6c0d210f37cb3629dcca4f94fd36f9ea0f06bb"} Jan 31 05:39:29 crc kubenswrapper[4832]: I0131 05:39:29.591134 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:39:30 crc kubenswrapper[4832]: I0131 05:39:30.599948 4832 generic.go:334] "Generic (PLEG): container finished" podID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" 
containerID="25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5" exitCode=0 Jan 31 05:39:30 crc kubenswrapper[4832]: I0131 05:39:30.600037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerDied","Data":"25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5"} Jan 31 05:39:31 crc kubenswrapper[4832]: I0131 05:39:31.616691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerStarted","Data":"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920"} Jan 31 05:39:32 crc kubenswrapper[4832]: I0131 05:39:32.625847 4832 generic.go:334] "Generic (PLEG): container finished" podID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerID="d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920" exitCode=0 Jan 31 05:39:32 crc kubenswrapper[4832]: I0131 05:39:32.625945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerDied","Data":"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920"} Jan 31 05:39:33 crc kubenswrapper[4832]: I0131 05:39:33.635485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerStarted","Data":"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf"} Jan 31 05:39:33 crc kubenswrapper[4832]: I0131 05:39:33.656112 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqmts" podStartSLOduration=3.025832587 podStartE2EDuration="5.656087061s" podCreationTimestamp="2026-01-31 05:39:28 +0000 UTC" firstStartedPulling="2026-01-31 05:39:30.603290282 +0000 UTC 
m=+3379.552111968" lastFinishedPulling="2026-01-31 05:39:33.233544757 +0000 UTC m=+3382.182366442" observedRunningTime="2026-01-31 05:39:33.652792519 +0000 UTC m=+3382.601614214" watchObservedRunningTime="2026-01-31 05:39:33.656087061 +0000 UTC m=+3382.604908756" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.109985 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2qhxc/must-gather-4bcjm"] Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.112515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.115702 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2qhxc"/"default-dockercfg-p6xbd" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.116078 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2qhxc"/"openshift-service-ca.crt" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.116345 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2qhxc"/"kube-root-ca.crt" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.139299 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2qhxc/must-gather-4bcjm"] Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.276222 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7dh\" (UniqueName: \"kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.276416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.378278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7dh\" (UniqueName: \"kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.378445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.378909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.413228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7dh\" (UniqueName: \"kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh\") pod \"must-gather-4bcjm\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.432774 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:39:37 crc kubenswrapper[4832]: I0131 05:39:37.965132 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2qhxc/must-gather-4bcjm"] Jan 31 05:39:38 crc kubenswrapper[4832]: I0131 05:39:38.686433 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" event={"ID":"312a101b-8f97-4394-b026-07e3ec046016","Type":"ContainerStarted","Data":"971b639f19f3d0595d2286af2907d9bddbf907c38a58526c4175fc7f6a5ad0e1"} Jan 31 05:39:39 crc kubenswrapper[4832]: I0131 05:39:39.093107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:39 crc kubenswrapper[4832]: I0131 05:39:39.093450 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:39:40 crc kubenswrapper[4832]: I0131 05:39:40.168069 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:39:40 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:39:40 crc kubenswrapper[4832]: > Jan 31 05:39:43 crc kubenswrapper[4832]: I0131 05:39:43.743904 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" event={"ID":"312a101b-8f97-4394-b026-07e3ec046016","Type":"ContainerStarted","Data":"0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c"} Jan 31 05:39:43 crc kubenswrapper[4832]: I0131 05:39:43.744322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" 
event={"ID":"312a101b-8f97-4394-b026-07e3ec046016","Type":"ContainerStarted","Data":"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014"} Jan 31 05:39:43 crc kubenswrapper[4832]: I0131 05:39:43.763637 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" podStartSLOduration=1.636260247 podStartE2EDuration="6.763612796s" podCreationTimestamp="2026-01-31 05:39:37 +0000 UTC" firstStartedPulling="2026-01-31 05:39:37.968989418 +0000 UTC m=+3386.917811103" lastFinishedPulling="2026-01-31 05:39:43.096341967 +0000 UTC m=+3392.045163652" observedRunningTime="2026-01-31 05:39:43.762583363 +0000 UTC m=+3392.711405078" watchObservedRunningTime="2026-01-31 05:39:43.763612796 +0000 UTC m=+3392.712434491" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.179939 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-c6ct8"] Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.184493 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.281448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfh2v\" (UniqueName: \"kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.281510 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.382822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfh2v\" (UniqueName: \"kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.382882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.383088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc 
kubenswrapper[4832]: I0131 05:39:47.402407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfh2v\" (UniqueName: \"kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v\") pod \"crc-debug-c6ct8\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.504340 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:39:47 crc kubenswrapper[4832]: I0131 05:39:47.774349 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" event={"ID":"85c3c738-7042-4478-81d4-faff5ed75c0b","Type":"ContainerStarted","Data":"a1a1d5bb528d058ccca7232f831dac18a67143db02fa10d794b084dde6f58f76"} Jan 31 05:39:48 crc kubenswrapper[4832]: I0131 05:39:48.539655 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:39:48 crc kubenswrapper[4832]: I0131 05:39:48.540162 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:39:50 crc kubenswrapper[4832]: I0131 05:39:50.167250 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:39:50 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 
31 05:39:50 crc kubenswrapper[4832]: > Jan 31 05:40:00 crc kubenswrapper[4832]: I0131 05:40:00.154271 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:00 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:00 crc kubenswrapper[4832]: > Jan 31 05:40:01 crc kubenswrapper[4832]: I0131 05:40:01.911731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" event={"ID":"85c3c738-7042-4478-81d4-faff5ed75c0b","Type":"ContainerStarted","Data":"954737604af00c89f8c7674a3ec9efe443ca4f472a4b604821011396612f7ad4"} Jan 31 05:40:01 crc kubenswrapper[4832]: I0131 05:40:01.930606 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" podStartSLOduration=1.584039634 podStartE2EDuration="14.930587098s" podCreationTimestamp="2026-01-31 05:39:47 +0000 UTC" firstStartedPulling="2026-01-31 05:39:47.552528848 +0000 UTC m=+3396.501350533" lastFinishedPulling="2026-01-31 05:40:00.899076312 +0000 UTC m=+3409.847897997" observedRunningTime="2026-01-31 05:40:01.924909491 +0000 UTC m=+3410.873731186" watchObservedRunningTime="2026-01-31 05:40:01.930587098 +0000 UTC m=+3410.879408783" Jan 31 05:40:10 crc kubenswrapper[4832]: I0131 05:40:10.174088 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:10 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:10 crc kubenswrapper[4832]: > Jan 31 05:40:18 crc kubenswrapper[4832]: I0131 05:40:18.539971 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:40:18 crc kubenswrapper[4832]: I0131 05:40:18.540643 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:40:18 crc kubenswrapper[4832]: I0131 05:40:18.540685 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:40:18 crc kubenswrapper[4832]: I0131 05:40:18.541233 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:40:18 crc kubenswrapper[4832]: I0131 05:40:18.541288 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6" gracePeriod=600 Jan 31 05:40:19 crc kubenswrapper[4832]: I0131 05:40:19.085766 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6" exitCode=0 Jan 31 05:40:19 crc kubenswrapper[4832]: I0131 05:40:19.085815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6"} Jan 31 05:40:19 crc kubenswrapper[4832]: I0131 05:40:19.085856 4832 scope.go:117] "RemoveContainer" containerID="f30019bbdcaef060cbc0a6d6d47b1dd6fe6447f7cb16568cf82a6d90bf4b90c5" Jan 31 05:40:20 crc kubenswrapper[4832]: I0131 05:40:20.095229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522"} Jan 31 05:40:20 crc kubenswrapper[4832]: I0131 05:40:20.145959 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:20 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:20 crc kubenswrapper[4832]: > Jan 31 05:40:30 crc kubenswrapper[4832]: I0131 05:40:30.148779 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:30 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:30 crc kubenswrapper[4832]: > Jan 31 05:40:40 crc kubenswrapper[4832]: I0131 05:40:40.144482 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:40 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:40 crc kubenswrapper[4832]: > Jan 31 05:40:50 
crc kubenswrapper[4832]: I0131 05:40:50.144453 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" probeResult="failure" output=< Jan 31 05:40:50 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Jan 31 05:40:50 crc kubenswrapper[4832]: > Jan 31 05:40:51 crc kubenswrapper[4832]: I0131 05:40:51.371942 4832 generic.go:334] "Generic (PLEG): container finished" podID="85c3c738-7042-4478-81d4-faff5ed75c0b" containerID="954737604af00c89f8c7674a3ec9efe443ca4f472a4b604821011396612f7ad4" exitCode=0 Jan 31 05:40:51 crc kubenswrapper[4832]: I0131 05:40:51.371996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" event={"ID":"85c3c738-7042-4478-81d4-faff5ed75c0b","Type":"ContainerDied","Data":"954737604af00c89f8c7674a3ec9efe443ca4f472a4b604821011396612f7ad4"} Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.511226 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.546498 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-c6ct8"] Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.554109 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-c6ct8"] Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.621297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host\") pod \"85c3c738-7042-4478-81d4-faff5ed75c0b\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.621378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host" (OuterVolumeSpecName: "host") pod "85c3c738-7042-4478-81d4-faff5ed75c0b" (UID: "85c3c738-7042-4478-81d4-faff5ed75c0b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.621427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfh2v\" (UniqueName: \"kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v\") pod \"85c3c738-7042-4478-81d4-faff5ed75c0b\" (UID: \"85c3c738-7042-4478-81d4-faff5ed75c0b\") " Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.621920 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85c3c738-7042-4478-81d4-faff5ed75c0b-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.627175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v" (OuterVolumeSpecName: "kube-api-access-pfh2v") pod "85c3c738-7042-4478-81d4-faff5ed75c0b" (UID: "85c3c738-7042-4478-81d4-faff5ed75c0b"). InnerVolumeSpecName "kube-api-access-pfh2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:52 crc kubenswrapper[4832]: I0131 05:40:52.723948 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfh2v\" (UniqueName: \"kubernetes.io/projected/85c3c738-7042-4478-81d4-faff5ed75c0b-kube-api-access-pfh2v\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.391043 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a1d5bb528d058ccca7232f831dac18a67143db02fa10d794b084dde6f58f76" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.391080 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-c6ct8" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.737543 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-hld6v"] Jan 31 05:40:53 crc kubenswrapper[4832]: E0131 05:40:53.738070 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c3c738-7042-4478-81d4-faff5ed75c0b" containerName="container-00" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.738083 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c3c738-7042-4478-81d4-faff5ed75c0b" containerName="container-00" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.738267 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c3c738-7042-4478-81d4-faff5ed75c0b" containerName="container-00" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.738845 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.851309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.851893 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpg7\" (UniqueName: \"kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.871741 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c3c738-7042-4478-81d4-faff5ed75c0b" 
path="/var/lib/kubelet/pods/85c3c738-7042-4478-81d4-faff5ed75c0b/volumes" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.954865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.954963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.955164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lpg7\" (UniqueName: \"kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:53 crc kubenswrapper[4832]: I0131 05:40:53.983038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lpg7\" (UniqueName: \"kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7\") pod \"crc-debug-hld6v\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:54 crc kubenswrapper[4832]: I0131 05:40:54.073252 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:54 crc kubenswrapper[4832]: I0131 05:40:54.400518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" event={"ID":"d1b90507-7bba-488b-ba93-5204de174f91","Type":"ContainerStarted","Data":"36e28c76baf5fbf563d710bfcad2eac01e7dfa443525df87043c85a77f807ece"} Jan 31 05:40:55 crc kubenswrapper[4832]: I0131 05:40:55.410249 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b90507-7bba-488b-ba93-5204de174f91" containerID="11ef22b418c5cbfec238a10ce5119615149fed210eac56ba466c24aab9e228e3" exitCode=0 Jan 31 05:40:55 crc kubenswrapper[4832]: I0131 05:40:55.410292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" event={"ID":"d1b90507-7bba-488b-ba93-5204de174f91","Type":"ContainerDied","Data":"11ef22b418c5cbfec238a10ce5119615149fed210eac56ba466c24aab9e228e3"} Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.010816 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-hld6v"] Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.022864 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-hld6v"] Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.517392 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.604633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lpg7\" (UniqueName: \"kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7\") pod \"d1b90507-7bba-488b-ba93-5204de174f91\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.604778 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host\") pod \"d1b90507-7bba-488b-ba93-5204de174f91\" (UID: \"d1b90507-7bba-488b-ba93-5204de174f91\") " Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.604828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host" (OuterVolumeSpecName: "host") pod "d1b90507-7bba-488b-ba93-5204de174f91" (UID: "d1b90507-7bba-488b-ba93-5204de174f91"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.605272 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d1b90507-7bba-488b-ba93-5204de174f91-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.613898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7" (OuterVolumeSpecName: "kube-api-access-4lpg7") pod "d1b90507-7bba-488b-ba93-5204de174f91" (UID: "d1b90507-7bba-488b-ba93-5204de174f91"). InnerVolumeSpecName "kube-api-access-4lpg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:56 crc kubenswrapper[4832]: I0131 05:40:56.706897 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lpg7\" (UniqueName: \"kubernetes.io/projected/d1b90507-7bba-488b-ba93-5204de174f91-kube-api-access-4lpg7\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.192918 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-dwvrt"] Jan 31 05:40:57 crc kubenswrapper[4832]: E0131 05:40:57.193585 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b90507-7bba-488b-ba93-5204de174f91" containerName="container-00" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.193606 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b90507-7bba-488b-ba93-5204de174f91" containerName="container-00" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.193846 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b90507-7bba-488b-ba93-5204de174f91" containerName="container-00" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.194455 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.318299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.318364 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hj6\" (UniqueName: \"kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.419752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.419818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6hj6\" (UniqueName: \"kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.419866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc 
kubenswrapper[4832]: I0131 05:40:57.429236 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e28c76baf5fbf563d710bfcad2eac01e7dfa443525df87043c85a77f807ece" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.429292 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-hld6v" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.459037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6hj6\" (UniqueName: \"kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6\") pod \"crc-debug-dwvrt\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.515686 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:57 crc kubenswrapper[4832]: W0131 05:40:57.559519 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb132c923_5869_4011_880a_5b2a04928bac.slice/crio-ecb53bf457a396002c59ab27dd909d1bccc423aa370c40d54e5200d267d13590 WatchSource:0}: Error finding container ecb53bf457a396002c59ab27dd909d1bccc423aa370c40d54e5200d267d13590: Status 404 returned error can't find the container with id ecb53bf457a396002c59ab27dd909d1bccc423aa370c40d54e5200d267d13590 Jan 31 05:40:57 crc kubenswrapper[4832]: I0131 05:40:57.871620 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b90507-7bba-488b-ba93-5204de174f91" path="/var/lib/kubelet/pods/d1b90507-7bba-488b-ba93-5204de174f91/volumes" Jan 31 05:40:58 crc kubenswrapper[4832]: I0131 05:40:58.440867 4832 generic.go:334] "Generic (PLEG): container finished" podID="b132c923-5869-4011-880a-5b2a04928bac" 
containerID="128ae2c01b671e69fa7cf162aafebc64bf7e4a5e4fb88c5c29f0ff5806914faf" exitCode=0 Jan 31 05:40:58 crc kubenswrapper[4832]: I0131 05:40:58.440910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" event={"ID":"b132c923-5869-4011-880a-5b2a04928bac","Type":"ContainerDied","Data":"128ae2c01b671e69fa7cf162aafebc64bf7e4a5e4fb88c5c29f0ff5806914faf"} Jan 31 05:40:58 crc kubenswrapper[4832]: I0131 05:40:58.440940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" event={"ID":"b132c923-5869-4011-880a-5b2a04928bac","Type":"ContainerStarted","Data":"ecb53bf457a396002c59ab27dd909d1bccc423aa370c40d54e5200d267d13590"} Jan 31 05:40:58 crc kubenswrapper[4832]: I0131 05:40:58.486585 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-dwvrt"] Jan 31 05:40:58 crc kubenswrapper[4832]: I0131 05:40:58.494453 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2qhxc/crc-debug-dwvrt"] Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.165067 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.228147 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.568287 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.664814 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6hj6\" (UniqueName: \"kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6\") pod \"b132c923-5869-4011-880a-5b2a04928bac\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.665032 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host\") pod \"b132c923-5869-4011-880a-5b2a04928bac\" (UID: \"b132c923-5869-4011-880a-5b2a04928bac\") " Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.665239 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host" (OuterVolumeSpecName: "host") pod "b132c923-5869-4011-880a-5b2a04928bac" (UID: "b132c923-5869-4011-880a-5b2a04928bac"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.666450 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b132c923-5869-4011-880a-5b2a04928bac-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.670054 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6" (OuterVolumeSpecName: "kube-api-access-x6hj6") pod "b132c923-5869-4011-880a-5b2a04928bac" (UID: "b132c923-5869-4011-880a-5b2a04928bac"). InnerVolumeSpecName "kube-api-access-x6hj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.768794 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6hj6\" (UniqueName: \"kubernetes.io/projected/b132c923-5869-4011-880a-5b2a04928bac-kube-api-access-x6hj6\") on node \"crc\" DevicePath \"\"" Jan 31 05:40:59 crc kubenswrapper[4832]: I0131 05:40:59.871637 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b132c923-5869-4011-880a-5b2a04928bac" path="/var/lib/kubelet/pods/b132c923-5869-4011-880a-5b2a04928bac/volumes" Jan 31 05:41:00 crc kubenswrapper[4832]: I0131 05:41:00.006774 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:41:00 crc kubenswrapper[4832]: I0131 05:41:00.460172 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2qhxc/crc-debug-dwvrt" Jan 31 05:41:00 crc kubenswrapper[4832]: I0131 05:41:00.460529 4832 scope.go:117] "RemoveContainer" containerID="128ae2c01b671e69fa7cf162aafebc64bf7e4a5e4fb88c5c29f0ff5806914faf" Jan 31 05:41:00 crc kubenswrapper[4832]: I0131 05:41:00.460279 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kqmts" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" containerID="cri-o://bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf" gracePeriod=2 Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.116770 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.305705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities\") pod \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.305819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvqh7\" (UniqueName: \"kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7\") pod \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.306031 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content\") pod \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\" (UID: \"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb\") " Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.306590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities" (OuterVolumeSpecName: "utilities") pod "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" (UID: "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.314990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7" (OuterVolumeSpecName: "kube-api-access-dvqh7") pod "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" (UID: "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb"). InnerVolumeSpecName "kube-api-access-dvqh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.408687 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.409017 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvqh7\" (UniqueName: \"kubernetes.io/projected/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-kube-api-access-dvqh7\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.429608 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" (UID: "137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.476613 4832 generic.go:334] "Generic (PLEG): container finished" podID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerID="bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf" exitCode=0 Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.476666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerDied","Data":"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf"} Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.476719 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqmts" event={"ID":"137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb","Type":"ContainerDied","Data":"8ea15a1de1d19d0d3051de77bd6c0d210f37cb3629dcca4f94fd36f9ea0f06bb"} Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.476739 
4832 scope.go:117] "RemoveContainer" containerID="bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.476733 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqmts" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.510512 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.510994 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.511158 4832 scope.go:117] "RemoveContainer" containerID="d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.520284 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqmts"] Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.538947 4832 scope.go:117] "RemoveContainer" containerID="25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.591501 4832 scope.go:117] "RemoveContainer" containerID="bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf" Jan 31 05:41:01 crc kubenswrapper[4832]: E0131 05:41:01.592478 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf\": container with ID starting with bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf not found: ID does not exist" containerID="bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.592534 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf"} err="failed to get container status \"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf\": rpc error: code = NotFound desc = could not find container \"bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf\": container with ID starting with bf74e347fc05d44ad77bb83ba31aef19050e577f92403546c6d52bff23d3a0bf not found: ID does not exist" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.592592 4832 scope.go:117] "RemoveContainer" containerID="d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920" Jan 31 05:41:01 crc kubenswrapper[4832]: E0131 05:41:01.593330 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920\": container with ID starting with d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920 not found: ID does not exist" containerID="d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.593387 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920"} err="failed to get container status \"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920\": rpc error: code = NotFound desc = could not find container \"d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920\": container with ID starting with d0aa2b022eeede07e57f26fae67f121fff1e1db6675c6eeb6f694f27a4e82920 not found: ID does not exist" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.593405 4832 scope.go:117] "RemoveContainer" containerID="25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5" Jan 31 05:41:01 crc kubenswrapper[4832]: E0131 
05:41:01.593662 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5\": container with ID starting with 25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5 not found: ID does not exist" containerID="25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.593689 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5"} err="failed to get container status \"25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5\": rpc error: code = NotFound desc = could not find container \"25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5\": container with ID starting with 25685f5087d53c0d66023025a713c181c90cc09e8662de48167132739b879fb5 not found: ID does not exist" Jan 31 05:41:01 crc kubenswrapper[4832]: I0131 05:41:01.873806 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" path="/var/lib/kubelet/pods/137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb/volumes" Jan 31 05:41:14 crc kubenswrapper[4832]: I0131 05:41:14.375728 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-797bd69d58-5ff8g_becb2819-84d8-4a62-b98f-75e779ad0f56/barbican-api/0.log" Jan 31 05:41:14 crc kubenswrapper[4832]: I0131 05:41:14.593969 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-797bd69d58-5ff8g_becb2819-84d8-4a62-b98f-75e779ad0f56/barbican-api-log/0.log" Jan 31 05:41:14 crc kubenswrapper[4832]: I0131 05:41:14.657301 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c4c47d5bb-m6mqf_6df8e9f4-654c-449b-b5ce-2fb826d6449c/barbican-keystone-listener/0.log" Jan 31 05:41:14 crc 
kubenswrapper[4832]: I0131 05:41:14.780867 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c4c47d5bb-m6mqf_6df8e9f4-654c-449b-b5ce-2fb826d6449c/barbican-keystone-listener-log/0.log" Jan 31 05:41:14 crc kubenswrapper[4832]: I0131 05:41:14.856715 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8f9d96795-d8rrf_0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa/barbican-worker-log/0.log" Jan 31 05:41:14 crc kubenswrapper[4832]: I0131 05:41:14.881283 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8f9d96795-d8rrf_0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa/barbican-worker/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.037715 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw_27dc3183-5db8-4c94-8247-f5af07376737/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.120799 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/ceilometer-central-agent/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.231299 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/proxy-httpd/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.284673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/sg-core/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.309594 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/ceilometer-notification-agent/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.481626 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_7bf54b70-e647-47e2-a8fd-1f15cab614a6/cinder-api-log/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.490622 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7bf54b70-e647-47e2-a8fd-1f15cab614a6/cinder-api/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.635683 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d4b153e2-4087-4707-a751-3b518f670193/cinder-scheduler/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.725162 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d4b153e2-4087-4707-a751-3b518f670193/probe/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.810781 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4_3b016223-cc19-45ea-9ccb-fc81103e1e5f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:15 crc kubenswrapper[4832]: I0131 05:41:15.945779 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx_915d4541-b4f7-4a50-ba36-3ed09a631c87/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.035380 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/init/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.246177 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/dnsmasq-dns/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.252466 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/init/0.log" Jan 31 05:41:16 crc 
kubenswrapper[4832]: I0131 05:41:16.322582 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx_ecaf2da0-d078-4810-9574-05b12bd09288/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.472507 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ace9e44a-55e5-48ae-9e2e-533ab30a5cd8/glance-httpd/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.515478 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ace9e44a-55e5-48ae-9e2e-533ab30a5cd8/glance-log/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.706273 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0720e9f6-21f1-43e9-b075-a35d548f4af9/glance-httpd/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.731404 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0720e9f6-21f1-43e9-b075-a35d548f4af9/glance-log/0.log" Jan 31 05:41:16 crc kubenswrapper[4832]: I0131 05:41:16.972828 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f6b9f547b-mrjcq_769ea643-f342-413c-a719-7c65e086b9eb/horizon/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 05:41:17.097490 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mzk88_8ab8bc58-9ae3-4284-b959-164da6ebee5e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 05:41:17.279472 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dgr66_23b0f31e-31d6-4f12-91d5-fe078d89dfb7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 
05:41:17.281663 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f6b9f547b-mrjcq_769ea643-f342-413c-a719-7c65e086b9eb/horizon-log/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 05:41:17.537826 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bca479b9-47c2-4c05-9b4c-dbde78e18be7/kube-state-metrics/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 05:41:17.725707 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79bb65dc58-kdbq7_a8150cab-aaf2-42f5-8148-ffb124e56569/keystone-api/0.log" Jan 31 05:41:17 crc kubenswrapper[4832]: I0131 05:41:17.782367 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4qppj_89932c58-5727-49df-bd91-903acb18f444/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:18 crc kubenswrapper[4832]: I0131 05:41:18.139426 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7c54d8bf-w9s7x_9192a7c5-49bb-4fed-858e-0c14b96f1288/neutron-api/0.log" Jan 31 05:41:18 crc kubenswrapper[4832]: I0131 05:41:18.325659 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7c54d8bf-w9s7x_9192a7c5-49bb-4fed-858e-0c14b96f1288/neutron-httpd/0.log" Jan 31 05:41:18 crc kubenswrapper[4832]: I0131 05:41:18.606837 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs_ce3f980d-61a1-4d42-8b56-f7a064c667da/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.315073 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f25cab2-43da-43e5-9cb7-78112bf8ea08/nova-api-log/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.374400 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_bbfa7e6d-7200-4b32-9749-d04865e74d5e/nova-cell0-conductor-conductor/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.565539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f25cab2-43da-43e5-9cb7-78112bf8ea08/nova-api-api/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.650511 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d6ec6693-e464-4258-a3fa-fef2b9c97bae/nova-cell1-conductor-conductor/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.754506 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e209d1b6-1bc1-4667-ad99-4b2cf348f2b7/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 05:41:19 crc kubenswrapper[4832]: I0131 05:41:19.875869 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p9rvr_3b3b6eae-8f54-4057-b9c8-74f27b762ada/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: I0131 05:41:20.085997 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_83d7f27d-9408-4f6b-ab25-a0f453cc377e/nova-metadata-log/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: I0131 05:41:20.377079 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e97fe0a4-c76d-439e-a096-460328d1d9d4/nova-scheduler-scheduler/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: I0131 05:41:20.379107 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/mysql-bootstrap/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: I0131 05:41:20.606105 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/mysql-bootstrap/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: 
I0131 05:41:20.624289 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/galera/0.log" Jan 31 05:41:20 crc kubenswrapper[4832]: I0131 05:41:20.786347 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/mysql-bootstrap/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.018320 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/mysql-bootstrap/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.051488 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/galera/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.212807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_83d7f27d-9408-4f6b-ab25-a0f453cc377e/nova-metadata-metadata/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.284694 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1847eb5f-c952-4d08-8579-786994ad5c56/openstackclient/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.359065 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8sq59_103522f1-37d5-48e1-8004-ab58b154d040/ovn-controller/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.507150 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-srkd5_d7e9680d-d2db-4c26-99be-f2e6331d64bf/openstack-network-exporter/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.617822 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server-init/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.803208 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.805113 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovs-vswitchd/0.log" Jan 31 05:41:21 crc kubenswrapper[4832]: I0131 05:41:21.820263 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server-init/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.035076 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8/openstack-network-exporter/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.104544 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8/ovn-northd/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.120140 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pvjvj_70dab5d9-fca1-425f-91e9-42b0013c2e64/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.301966 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05c379c-cf2f-4179-a902-475d2a555294/openstack-network-exporter/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.347931 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05c379c-cf2f-4179-a902-475d2a555294/ovsdbserver-nb/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.521611 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2edd879-2e11-41b2-872a-1f50cf71719f/openstack-network-exporter/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 
05:41:22.525995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2edd879-2e11-41b2-872a-1f50cf71719f/ovsdbserver-sb/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.672851 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548576cf8d-gz7f7_0bfb9b89-7b02-4f5a-b967-d84ad8e20325/placement-api/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.793272 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548576cf8d-gz7f7_0bfb9b89-7b02-4f5a-b967-d84ad8e20325/placement-log/0.log" Jan 31 05:41:22 crc kubenswrapper[4832]: I0131 05:41:22.845309 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/setup-container/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.120493 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/setup-container/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.122310 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/setup-container/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.160634 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/rabbitmq/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.290322 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/setup-container/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.363749 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/rabbitmq/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.437743 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq_cafe239e-692a-4f8c-baf3-94b454ed706d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.613528 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l9vkx_46cb5cd9-ca77-4c57-9d83-b4ef015da993/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.632189 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7_96b10887-6c77-4792-ae1d-87209c13b9fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.822880 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zzp8g_eac023bd-8a06-4be3-9c44-a29c87e4c44c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:23 crc kubenswrapper[4832]: I0131 05:41:23.920256 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gmglt_26297c57-667f-414b-912c-2bfa05b73299/ssh-known-hosts-edpm-deployment/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.123084 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6df44bf7d7-6dwfp_a8780918-f34b-41e5-9ccc-d12823931da5/proxy-httpd/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.138877 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6df44bf7d7-6dwfp_a8780918-f34b-41e5-9ccc-d12823931da5/proxy-server/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.319442 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-auditor/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.352980 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bwd8q_2d790d64-4815-452e-9f17-13b1b9b75c35/swift-ring-rebalance/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.445737 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-reaper/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.577984 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-auditor/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.627553 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-server/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.675534 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-replicator/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.687239 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-replicator/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.769758 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-server/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.823774 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-updater/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.889952 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-auditor/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.993946 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-replicator/0.log" Jan 31 05:41:24 crc kubenswrapper[4832]: I0131 05:41:24.997538 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-expirer/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.062409 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-server/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.134976 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-updater/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.217816 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/rsync/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.259964 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/swift-recon-cron/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.437839 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6_02aa5c8f-25f9-43a0-9d6e-dd67d7348443/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.499276 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_cf637281-101a-4e11-93b6-74f55d914798/tempest-tests-tempest-tests-runner/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 05:41:25.634397 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9b96ff1a-9380-4bdc-a490-bdfa6e760792/test-operator-logs-container/0.log" Jan 31 05:41:25 crc kubenswrapper[4832]: I0131 
05:41:25.740213 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc_945ad601-23f1-4494-a2a8-6bf53b841d2f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:41:35 crc kubenswrapper[4832]: I0131 05:41:35.442890 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_86b5161e-fa9c-4b0d-9549-2ab191b90e33/memcached/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.373357 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-zr7l4_0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d/manager/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.593895 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-5hl82_c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b/manager/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.619034 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-gc6zx_805b9f0e-cb57-4b71-b199-b8ee289af169/manager/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.791439 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.946567 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.964285 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 
05:41:50 crc kubenswrapper[4832]: I0131 05:41:50.966121 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.142756 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.192296 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.193658 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/extract/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.382900 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-pnktz_9d690743-c300-46c9-83d7-c416ba5aff83/manager/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.447799 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-2hr2t_141c81b8-f2f6-4f96-9ac7-83305f4eabd0/manager/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.582894 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-4m8x2_b23fff55-653f-417a-9f77-d7b115586ade/manager/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.840304 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-4445f_830a5967-5b56-4c70-8940-ef90cd945807/manager/0.log" Jan 31 05:41:51 crc kubenswrapper[4832]: I0131 05:41:51.881640 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57997b5fcd-hjsbn_48118fb9-dcf4-45f5-8096-c558f980eab4/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.057496 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-jl4p7_0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.081099 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-jtr9c_ea8d7014-c1f0-4b4f-aa01-7865124c3187/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.244371 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-fdxks_7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.308646 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-c9pxb_e7569d69-3ad6-4127-a49f-a16706a35099/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.497853 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-jcrbt_d86d3d02-d07f-4bf0-a01a-18652faa5111/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.539991 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-8xwhl_049ad615-904c-4043-b395-dd242e743140/manager/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.826792 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6cbc497cdb-zc65j_f76fb23f-871d-459c-b196-8e33703f7e44/operator/0.log" Jan 31 05:41:52 crc kubenswrapper[4832]: I0131 05:41:52.940113 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx_f59551da-68de-4704-98fd-d9355e69c5af/manager/0.log" Jan 31 05:41:53 crc kubenswrapper[4832]: I0131 05:41:53.161919 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sxhvd_09997099-ea53-4947-b5ec-eaed51db7a12/registry-server/0.log" Jan 31 05:41:53 crc kubenswrapper[4832]: I0131 05:41:53.456019 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-t6wm8_6d39a5f9-a0b0-4a9e-871b-30a307adfd3d/manager/0.log" Jan 31 05:41:53 crc kubenswrapper[4832]: I0131 05:41:53.494198 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9ptdq_acc59b3c-877b-4a0e-a118-76a05d362ad5/manager/0.log" Jan 31 05:41:53 crc kubenswrapper[4832]: I0131 05:41:53.703001 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8sg7w_f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3/operator/0.log" Jan 31 05:41:53 crc kubenswrapper[4832]: I0131 05:41:53.944452 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-m49v4_aec1c2a0-52b1-4a2e-8986-1e12be79d67c/manager/0.log" Jan 31 05:41:54 crc kubenswrapper[4832]: I0131 05:41:54.023260 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-r2nnw_551db33b-8ad9-4a8c-9275-2c19c1104232/manager/0.log" Jan 31 05:41:54 crc kubenswrapper[4832]: I0131 05:41:54.235233 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-2vw9x_9ba96940-214f-41b4-a1a2-ecdeced92715/manager/0.log" Jan 31 05:41:54 crc kubenswrapper[4832]: I0131 05:41:54.245947 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68ffd75798-45z7h_fedc767a-c749-4373-84ab-c32673c34e40/manager/0.log" Jan 31 05:41:54 crc kubenswrapper[4832]: I0131 05:41:54.351748 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-7t5cl_e6c1771e-5f66-444c-8718-e6022bbbb473/manager/0.log" Jan 31 05:42:15 crc kubenswrapper[4832]: I0131 05:42:15.238914 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9kpwz_5797b6b7-298b-4e04-8945-0a733f37feaa/control-plane-machine-set-operator/0.log" Jan 31 05:42:15 crc kubenswrapper[4832]: I0131 05:42:15.423408 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn2qs_9ed37686-689b-46e5-8069-0e4de3519afb/kube-rbac-proxy/0.log" Jan 31 05:42:15 crc kubenswrapper[4832]: I0131 05:42:15.474416 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn2qs_9ed37686-689b-46e5-8069-0e4de3519afb/machine-api-operator/0.log" Jan 31 05:42:28 crc kubenswrapper[4832]: I0131 05:42:28.826400 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xqw4t_c0e95955-9451-49d7-89f9-daff9bd04f21/cert-manager-controller/0.log" Jan 31 05:42:29 crc kubenswrapper[4832]: I0131 05:42:29.059071 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-465dm_82dfe439-0519-43f4-867f-b68944898393/cert-manager-cainjector/0.log" Jan 31 05:42:29 crc kubenswrapper[4832]: I0131 
05:42:29.084696 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-26nzk_0ebb0bad-994a-4c2a-b9d2-21f38ee3939a/cert-manager-webhook/0.log" Jan 31 05:42:42 crc kubenswrapper[4832]: I0131 05:42:42.451321 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nl2wt_6abddfff-a35d-4b7a-aeba-354c6b045b6f/nmstate-console-plugin/0.log" Jan 31 05:42:42 crc kubenswrapper[4832]: I0131 05:42:42.749284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-z9vf4_f14f6771-126c-41a5-9810-7e4ed01aae96/nmstate-handler/0.log" Jan 31 05:42:42 crc kubenswrapper[4832]: I0131 05:42:42.835665 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m2d42_668e7e0f-218c-48f9-a40b-13f83d5bf7b9/kube-rbac-proxy/0.log" Jan 31 05:42:42 crc kubenswrapper[4832]: I0131 05:42:42.891522 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m2d42_668e7e0f-218c-48f9-a40b-13f83d5bf7b9/nmstate-metrics/0.log" Jan 31 05:42:43 crc kubenswrapper[4832]: I0131 05:42:43.017746 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wgqxf_a73e7d2a-36f0-49e9-82ab-11ede6b1761b/nmstate-operator/0.log" Jan 31 05:42:43 crc kubenswrapper[4832]: I0131 05:42:43.100136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-nwkdn_c78413bf-08b7-4f63-b849-6206713fe6af/nmstate-webhook/0.log" Jan 31 05:42:48 crc kubenswrapper[4832]: I0131 05:42:48.540144 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:42:48 
crc kubenswrapper[4832]: I0131 05:42:48.540898 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.374571 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d4lls_4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5/kube-rbac-proxy/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.571371 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.586721 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d4lls_4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5/controller/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.765952 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.787356 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.793154 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.844397 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.940232 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.960372 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log" Jan 31 05:43:10 crc kubenswrapper[4832]: I0131 05:43:10.970251 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.005113 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.173186 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.176557 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.180461 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.250945 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/controller/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.432485 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/kube-rbac-proxy/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.432947 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/frr-metrics/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.461078 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/kube-rbac-proxy-frr/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.643679 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/reloader/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.704730 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wd6p6_c10722d0-a029-4829-87c5-3f4340ea19ff/frr-k8s-webhook-server/0.log" Jan 31 05:43:11 crc kubenswrapper[4832]: I0131 05:43:11.891770 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8567bf5564-5cjq7_873c6fd7-9f23-4376-96f6-3e8a19b56593/manager/0.log" Jan 31 05:43:12 crc kubenswrapper[4832]: I0131 05:43:12.082190 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69fbdc97fc-wxfb2_01a0d1ce-012e-4200-ac92-995c0f1a2d1c/webhook-server/0.log" Jan 31 05:43:12 crc kubenswrapper[4832]: I0131 05:43:12.180173 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cbkcm_b8624b4c-df9a-43b3-8f8c-99b9290a7956/kube-rbac-proxy/0.log" Jan 31 05:43:12 crc kubenswrapper[4832]: I0131 05:43:12.806130 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cbkcm_b8624b4c-df9a-43b3-8f8c-99b9290a7956/speaker/0.log" Jan 31 05:43:12 crc kubenswrapper[4832]: I0131 05:43:12.970369 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/frr/0.log" Jan 31 05:43:18 crc kubenswrapper[4832]: I0131 05:43:18.539530 4832 patch_prober.go:28] 
interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:43:18 crc kubenswrapper[4832]: I0131 05:43:18.540648 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:43:24 crc kubenswrapper[4832]: I0131 05:43:24.877191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log" Jan 31 05:43:24 crc kubenswrapper[4832]: I0131 05:43:24.998476 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.051404 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.080362 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.260873 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log" Jan 31 05:43:25 crc 
kubenswrapper[4832]: I0131 05:43:25.264255 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.283222 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/extract/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.413777 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.591641 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.605531 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.641765 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.775947 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.787760 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.825310 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/extract/0.log" Jan 31 05:43:25 crc kubenswrapper[4832]: I0131 05:43:25.959635 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.099071 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.121276 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.121340 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.281068 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.281292 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.486734 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.688918 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.711482 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.749936 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.922267 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log" Jan 31 05:43:26 crc kubenswrapper[4832]: I0131 05:43:26.941919 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.142999 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tglv7_a23cd15a-ae33-49a9-bf22-0f0e4786b18f/marketplace-operator/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.229037 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/registry-server/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.400523 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.665857 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.721460 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/registry-server/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.726449 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.791435 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.887679 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log" Jan 31 05:43:27 crc kubenswrapper[4832]: I0131 05:43:27.892483 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.079976 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/registry-server/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.092897 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.274693 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.275388 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.287466 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.602970 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log" Jan 31 05:43:28 crc kubenswrapper[4832]: I0131 05:43:28.610703 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log" Jan 31 05:43:29 crc kubenswrapper[4832]: I0131 05:43:29.270827 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/registry-server/0.log" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.540134 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.540756 4832 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.540812 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.541601 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.541654 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" gracePeriod=600 Jan 31 05:43:48 crc kubenswrapper[4832]: E0131 05:43:48.736616 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.953213 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" 
containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" exitCode=0 Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.953281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522"} Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.953339 4832 scope.go:117] "RemoveContainer" containerID="f5f05e052d2c0ca776347fca83a19c093bdcddf6eb7b337619933b8b58eba1c6" Jan 31 05:43:48 crc kubenswrapper[4832]: I0131 05:43:48.954279 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:43:48 crc kubenswrapper[4832]: E0131 05:43:48.954642 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:44:01 crc kubenswrapper[4832]: I0131 05:44:01.865716 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:44:01 crc kubenswrapper[4832]: E0131 05:44:01.866495 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:44:16 crc kubenswrapper[4832]: I0131 
05:44:16.859624 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:44:16 crc kubenswrapper[4832]: E0131 05:44:16.860779 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:44:31 crc kubenswrapper[4832]: I0131 05:44:31.866423 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:44:31 crc kubenswrapper[4832]: E0131 05:44:31.866930 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:44:45 crc kubenswrapper[4832]: I0131 05:44:45.860334 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:44:45 crc kubenswrapper[4832]: E0131 05:44:45.861333 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:44:57 crc 
kubenswrapper[4832]: I0131 05:44:57.859885 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:44:57 crc kubenswrapper[4832]: E0131 05:44:57.860692 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.178736 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7"] Jan 31 05:45:00 crc kubenswrapper[4832]: E0131 05:45:00.180090 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b132c923-5869-4011-880a-5b2a04928bac" containerName="container-00" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.180129 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b132c923-5869-4011-880a-5b2a04928bac" containerName="container-00" Jan 31 05:45:00 crc kubenswrapper[4832]: E0131 05:45:00.180165 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="extract-utilities" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.180182 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="extract-utilities" Jan 31 05:45:00 crc kubenswrapper[4832]: E0131 05:45:00.180215 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="extract-content" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.180236 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" 
containerName="extract-content" Jan 31 05:45:00 crc kubenswrapper[4832]: E0131 05:45:00.180278 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.180295 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.184673 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="137a77e6-6ae4-4ac1-8a0d-c2367e97e8cb" containerName="registry-server" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.184764 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b132c923-5869-4011-880a-5b2a04928bac" containerName="container-00" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.187776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.190656 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.191185 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.212245 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7"] Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.365937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume\") pod \"collect-profiles-29497305-279w7\" (UID: 
\"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.366039 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrxj\" (UniqueName: \"kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.366096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.467470 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.467638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.467691 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-smrxj\" (UniqueName: \"kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.468828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.474439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.484059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrxj\" (UniqueName: \"kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj\") pod \"collect-profiles-29497305-279w7\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:00 crc kubenswrapper[4832]: I0131 05:45:00.509143 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:01 crc kubenswrapper[4832]: I0131 05:45:01.266757 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7"] Jan 31 05:45:01 crc kubenswrapper[4832]: I0131 05:45:01.678128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" event={"ID":"641faaed-b5fe-46e3-bf7a-04e8855fd6d1","Type":"ContainerStarted","Data":"d1c2d615af2c0967064f4e3bc7367acf2eefd543a5eefffe141caf0eb11d36bc"} Jan 31 05:45:01 crc kubenswrapper[4832]: I0131 05:45:01.678552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" event={"ID":"641faaed-b5fe-46e3-bf7a-04e8855fd6d1","Type":"ContainerStarted","Data":"e89f5a674d0d59dd730055f700d11cef62297e8caed2afded67d16bb9dd785fd"} Jan 31 05:45:01 crc kubenswrapper[4832]: I0131 05:45:01.699577 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" podStartSLOduration=1.699543204 podStartE2EDuration="1.699543204s" podCreationTimestamp="2026-01-31 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:45:01.69107831 +0000 UTC m=+3710.639899995" watchObservedRunningTime="2026-01-31 05:45:01.699543204 +0000 UTC m=+3710.648364879" Jan 31 05:45:02 crc kubenswrapper[4832]: I0131 05:45:02.692645 4832 generic.go:334] "Generic (PLEG): container finished" podID="641faaed-b5fe-46e3-bf7a-04e8855fd6d1" containerID="d1c2d615af2c0967064f4e3bc7367acf2eefd543a5eefffe141caf0eb11d36bc" exitCode=0 Jan 31 05:45:02 crc kubenswrapper[4832]: I0131 05:45:02.692783 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" event={"ID":"641faaed-b5fe-46e3-bf7a-04e8855fd6d1","Type":"ContainerDied","Data":"d1c2d615af2c0967064f4e3bc7367acf2eefd543a5eefffe141caf0eb11d36bc"} Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.024918 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.149500 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume\") pod \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.149615 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrxj\" (UniqueName: \"kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj\") pod \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.149797 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume\") pod \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\" (UID: \"641faaed-b5fe-46e3-bf7a-04e8855fd6d1\") " Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.150615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "641faaed-b5fe-46e3-bf7a-04e8855fd6d1" (UID: "641faaed-b5fe-46e3-bf7a-04e8855fd6d1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.155953 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "641faaed-b5fe-46e3-bf7a-04e8855fd6d1" (UID: "641faaed-b5fe-46e3-bf7a-04e8855fd6d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.155998 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj" (OuterVolumeSpecName: "kube-api-access-smrxj") pod "641faaed-b5fe-46e3-bf7a-04e8855fd6d1" (UID: "641faaed-b5fe-46e3-bf7a-04e8855fd6d1"). InnerVolumeSpecName "kube-api-access-smrxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.251928 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.252226 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.252238 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrxj\" (UniqueName: \"kubernetes.io/projected/641faaed-b5fe-46e3-bf7a-04e8855fd6d1-kube-api-access-smrxj\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.340581 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7"] Jan 31 05:45:04 crc kubenswrapper[4832]: 
I0131 05:45:04.347817 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-rjsp7"] Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.711981 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" event={"ID":"641faaed-b5fe-46e3-bf7a-04e8855fd6d1","Type":"ContainerDied","Data":"e89f5a674d0d59dd730055f700d11cef62297e8caed2afded67d16bb9dd785fd"} Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.712030 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89f5a674d0d59dd730055f700d11cef62297e8caed2afded67d16bb9dd785fd" Jan 31 05:45:04 crc kubenswrapper[4832]: I0131 05:45:04.712094 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497305-279w7" Jan 31 05:45:05 crc kubenswrapper[4832]: I0131 05:45:05.870179 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487160a9-724e-4892-a8e6-886547709572" path="/var/lib/kubelet/pods/487160a9-724e-4892-a8e6-886547709572/volumes" Jan 31 05:45:08 crc kubenswrapper[4832]: I0131 05:45:08.861385 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:45:08 crc kubenswrapper[4832]: E0131 05:45:08.862030 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:12 crc kubenswrapper[4832]: I0131 05:45:12.786963 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="312a101b-8f97-4394-b026-07e3ec046016" containerID="7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014" exitCode=0 Jan 31 05:45:12 crc kubenswrapper[4832]: I0131 05:45:12.787432 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" event={"ID":"312a101b-8f97-4394-b026-07e3ec046016","Type":"ContainerDied","Data":"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014"} Jan 31 05:45:12 crc kubenswrapper[4832]: I0131 05:45:12.788405 4832 scope.go:117] "RemoveContainer" containerID="7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014" Jan 31 05:45:13 crc kubenswrapper[4832]: I0131 05:45:13.699412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2qhxc_must-gather-4bcjm_312a101b-8f97-4394-b026-07e3ec046016/gather/0.log" Jan 31 05:45:20 crc kubenswrapper[4832]: I0131 05:45:20.935371 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2qhxc/must-gather-4bcjm"] Jan 31 05:45:20 crc kubenswrapper[4832]: I0131 05:45:20.936271 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="copy" containerID="cri-o://0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c" gracePeriod=2 Jan 31 05:45:20 crc kubenswrapper[4832]: I0131 05:45:20.946406 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2qhxc/must-gather-4bcjm"] Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.342087 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2qhxc_must-gather-4bcjm_312a101b-8f97-4394-b026-07e3ec046016/copy/0.log" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.342884 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.436473 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output\") pod \"312a101b-8f97-4394-b026-07e3ec046016\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.436992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7dh\" (UniqueName: \"kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh\") pod \"312a101b-8f97-4394-b026-07e3ec046016\" (UID: \"312a101b-8f97-4394-b026-07e3ec046016\") " Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.442322 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh" (OuterVolumeSpecName: "kube-api-access-xp7dh") pod "312a101b-8f97-4394-b026-07e3ec046016" (UID: "312a101b-8f97-4394-b026-07e3ec046016"). InnerVolumeSpecName "kube-api-access-xp7dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.540060 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7dh\" (UniqueName: \"kubernetes.io/projected/312a101b-8f97-4394-b026-07e3ec046016-kube-api-access-xp7dh\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.585888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "312a101b-8f97-4394-b026-07e3ec046016" (UID: "312a101b-8f97-4394-b026-07e3ec046016"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.641912 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/312a101b-8f97-4394-b026-07e3ec046016-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.871698 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312a101b-8f97-4394-b026-07e3ec046016" path="/var/lib/kubelet/pods/312a101b-8f97-4394-b026-07e3ec046016/volumes" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.879256 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2qhxc_must-gather-4bcjm_312a101b-8f97-4394-b026-07e3ec046016/copy/0.log" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.879617 4832 generic.go:334] "Generic (PLEG): container finished" podID="312a101b-8f97-4394-b026-07e3ec046016" containerID="0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c" exitCode=143 Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.879676 4832 scope.go:117] "RemoveContainer" containerID="0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.879807 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2qhxc/must-gather-4bcjm" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.904993 4832 scope.go:117] "RemoveContainer" containerID="7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.988717 4832 scope.go:117] "RemoveContainer" containerID="0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c" Jan 31 05:45:21 crc kubenswrapper[4832]: E0131 05:45:21.989252 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c\": container with ID starting with 0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c not found: ID does not exist" containerID="0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.989294 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c"} err="failed to get container status \"0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c\": rpc error: code = NotFound desc = could not find container \"0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c\": container with ID starting with 0e72b1e158f8cc6712f7e26c5582c0077e09c9a6eb4f0f6ef87eabab1063274c not found: ID does not exist" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.989316 4832 scope.go:117] "RemoveContainer" containerID="7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014" Jan 31 05:45:21 crc kubenswrapper[4832]: E0131 05:45:21.989733 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014\": container with ID starting with 
7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014 not found: ID does not exist" containerID="7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014" Jan 31 05:45:21 crc kubenswrapper[4832]: I0131 05:45:21.989787 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014"} err="failed to get container status \"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014\": rpc error: code = NotFound desc = could not find container \"7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014\": container with ID starting with 7b317c28179170c6910e0d9db2acd1d0dc4b3211735945986703ee1f7c0e9014 not found: ID does not exist" Jan 31 05:45:22 crc kubenswrapper[4832]: I0131 05:45:22.859837 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:45:22 crc kubenswrapper[4832]: E0131 05:45:22.860271 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:26 crc kubenswrapper[4832]: I0131 05:45:26.669076 4832 scope.go:117] "RemoveContainer" containerID="0b3f23d1b3d272baabb0ae3c3267521a9f733dae53e3a534cc0cfb7c69598ea4" Jan 31 05:45:33 crc kubenswrapper[4832]: I0131 05:45:33.859224 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:45:33 crc kubenswrapper[4832]: E0131 05:45:33.860067 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:46 crc kubenswrapper[4832]: I0131 05:45:46.859540 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:45:46 crc kubenswrapper[4832]: E0131 05:45:46.861050 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.391416 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:45:48 crc kubenswrapper[4832]: E0131 05:45:48.392304 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="copy" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392323 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="copy" Jan 31 05:45:48 crc kubenswrapper[4832]: E0131 05:45:48.392380 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641faaed-b5fe-46e3-bf7a-04e8855fd6d1" containerName="collect-profiles" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392390 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="641faaed-b5fe-46e3-bf7a-04e8855fd6d1" containerName="collect-profiles" Jan 31 05:45:48 crc kubenswrapper[4832]: E0131 05:45:48.392401 4832 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="gather" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392412 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="gather" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392682 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="copy" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392706 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="641faaed-b5fe-46e3-bf7a-04e8855fd6d1" containerName="collect-profiles" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.392726 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="312a101b-8f97-4394-b026-07e3ec046016" containerName="gather" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.394457 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.402125 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.528365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.528460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.529178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdm8\" (UniqueName: \"kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.631239 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.631343 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.631400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdm8\" (UniqueName: \"kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.631830 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.631877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.670941 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdm8\" (UniqueName: \"kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8\") pod \"redhat-marketplace-6mtdq\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:48 crc kubenswrapper[4832]: I0131 05:45:48.727346 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:49 crc kubenswrapper[4832]: I0131 05:45:49.203307 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:45:50 crc kubenswrapper[4832]: I0131 05:45:50.155884 4832 generic.go:334] "Generic (PLEG): container finished" podID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerID="aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf" exitCode=0 Jan 31 05:45:50 crc kubenswrapper[4832]: I0131 05:45:50.155945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerDied","Data":"aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf"} Jan 31 05:45:50 crc kubenswrapper[4832]: I0131 05:45:50.156244 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerStarted","Data":"8007ca26faa7d94c3c8b63921bb3426a38dbd9b55533442db95cd162fdae828b"} Jan 31 05:45:50 crc kubenswrapper[4832]: I0131 05:45:50.161865 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:45:52 crc kubenswrapper[4832]: I0131 05:45:52.174024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerDied","Data":"0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e"} Jan 31 05:45:52 crc kubenswrapper[4832]: I0131 05:45:52.174125 4832 generic.go:334] "Generic (PLEG): container finished" podID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerID="0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e" exitCode=0 Jan 31 05:45:53 crc kubenswrapper[4832]: I0131 05:45:53.183203 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerStarted","Data":"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8"} Jan 31 05:45:53 crc kubenswrapper[4832]: I0131 05:45:53.208616 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mtdq" podStartSLOduration=2.490763719 podStartE2EDuration="5.208593511s" podCreationTimestamp="2026-01-31 05:45:48 +0000 UTC" firstStartedPulling="2026-01-31 05:45:50.161514789 +0000 UTC m=+3759.110336494" lastFinishedPulling="2026-01-31 05:45:52.879344601 +0000 UTC m=+3761.828166286" observedRunningTime="2026-01-31 05:45:53.20216557 +0000 UTC m=+3762.150987255" watchObservedRunningTime="2026-01-31 05:45:53.208593511 +0000 UTC m=+3762.157415196" Jan 31 05:45:57 crc kubenswrapper[4832]: I0131 05:45:57.859649 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:45:57 crc kubenswrapper[4832]: E0131 05:45:57.860757 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:45:58 crc kubenswrapper[4832]: I0131 05:45:58.728069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:58 crc kubenswrapper[4832]: I0131 05:45:58.729182 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:58 crc kubenswrapper[4832]: I0131 05:45:58.795237 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:59 crc kubenswrapper[4832]: I0131 05:45:59.295437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:45:59 crc kubenswrapper[4832]: I0131 05:45:59.338487 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.265735 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mtdq" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="registry-server" containerID="cri-o://29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8" gracePeriod=2 Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.781313 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.808068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content\") pod \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.808129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdm8\" (UniqueName: \"kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8\") pod \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.808175 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities\") pod 
\"10cbf1cd-b865-4f72-9714-d9a23f8e0276\" (UID: \"10cbf1cd-b865-4f72-9714-d9a23f8e0276\") " Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.809627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities" (OuterVolumeSpecName: "utilities") pod "10cbf1cd-b865-4f72-9714-d9a23f8e0276" (UID: "10cbf1cd-b865-4f72-9714-d9a23f8e0276"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.829191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8" (OuterVolumeSpecName: "kube-api-access-rhdm8") pod "10cbf1cd-b865-4f72-9714-d9a23f8e0276" (UID: "10cbf1cd-b865-4f72-9714-d9a23f8e0276"). InnerVolumeSpecName "kube-api-access-rhdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.840844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10cbf1cd-b865-4f72-9714-d9a23f8e0276" (UID: "10cbf1cd-b865-4f72-9714-d9a23f8e0276"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.910411 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.910460 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhdm8\" (UniqueName: \"kubernetes.io/projected/10cbf1cd-b865-4f72-9714-d9a23f8e0276-kube-api-access-rhdm8\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:01 crc kubenswrapper[4832]: I0131 05:46:01.910471 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10cbf1cd-b865-4f72-9714-d9a23f8e0276-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.281784 4832 generic.go:334] "Generic (PLEG): container finished" podID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerID="29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8" exitCode=0 Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.281853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerDied","Data":"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8"} Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.281896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mtdq" event={"ID":"10cbf1cd-b865-4f72-9714-d9a23f8e0276","Type":"ContainerDied","Data":"8007ca26faa7d94c3c8b63921bb3426a38dbd9b55533442db95cd162fdae828b"} Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.281919 4832 scope.go:117] "RemoveContainer" containerID="29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 
05:46:02.281922 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mtdq" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.310953 4832 scope.go:117] "RemoveContainer" containerID="0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.318165 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.329879 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mtdq"] Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.537804 4832 scope.go:117] "RemoveContainer" containerID="aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.598128 4832 scope.go:117] "RemoveContainer" containerID="29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8" Jan 31 05:46:02 crc kubenswrapper[4832]: E0131 05:46:02.600288 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8\": container with ID starting with 29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8 not found: ID does not exist" containerID="29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.600648 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8"} err="failed to get container status \"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8\": rpc error: code = NotFound desc = could not find container \"29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8\": container with ID starting with 
29b3d27c143779934179b5d6d4bc4efa87713197bf45488d9fdeba13a8df91b8 not found: ID does not exist" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.600701 4832 scope.go:117] "RemoveContainer" containerID="0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e" Jan 31 05:46:02 crc kubenswrapper[4832]: E0131 05:46:02.601184 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e\": container with ID starting with 0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e not found: ID does not exist" containerID="0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.601280 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e"} err="failed to get container status \"0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e\": rpc error: code = NotFound desc = could not find container \"0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e\": container with ID starting with 0361b10eb9210597295717c6d9d6b14c767d10d5c6d49ca550f1847ac584be1e not found: ID does not exist" Jan 31 05:46:02 crc kubenswrapper[4832]: I0131 05:46:02.601319 4832 scope.go:117] "RemoveContainer" containerID="aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf" Jan 31 05:46:02 crc kubenswrapper[4832]: E0131 05:46:02.601931 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf\": container with ID starting with aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf not found: ID does not exist" containerID="aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf" Jan 31 05:46:02 crc 
kubenswrapper[4832]: I0131 05:46:02.601972 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf"} err="failed to get container status \"aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf\": rpc error: code = NotFound desc = could not find container \"aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf\": container with ID starting with aaa112726093db03ceaebf16d1cea4a854bb5819bc808d65e0c25d2dc2cad9cf not found: ID does not exist" Jan 31 05:46:03 crc kubenswrapper[4832]: I0131 05:46:03.868853 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" path="/var/lib/kubelet/pods/10cbf1cd-b865-4f72-9714-d9a23f8e0276/volumes" Jan 31 05:46:08 crc kubenswrapper[4832]: I0131 05:46:08.860789 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:46:08 crc kubenswrapper[4832]: E0131 05:46:08.862340 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:46:23 crc kubenswrapper[4832]: I0131 05:46:23.859904 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:46:23 crc kubenswrapper[4832]: E0131 05:46:23.861237 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:46:26 crc kubenswrapper[4832]: I0131 05:46:26.770110 4832 scope.go:117] "RemoveContainer" containerID="954737604af00c89f8c7674a3ec9efe443ca4f472a4b604821011396612f7ad4" Jan 31 05:46:34 crc kubenswrapper[4832]: I0131 05:46:34.860245 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:46:34 crc kubenswrapper[4832]: E0131 05:46:34.861086 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:46:45 crc kubenswrapper[4832]: I0131 05:46:45.041536 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7c7c54d8bf-w9s7x" podUID="9192a7c5-49bb-4fed-858e-0c14b96f1288" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 31 05:46:46 crc kubenswrapper[4832]: I0131 05:46:46.859688 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:46:46 crc kubenswrapper[4832]: E0131 05:46:46.860356 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.723328 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:46:55 crc kubenswrapper[4832]: E0131 05:46:55.724256 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="registry-server" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.724279 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="registry-server" Jan 31 05:46:55 crc kubenswrapper[4832]: E0131 05:46:55.724309 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="extract-content" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.724329 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="extract-content" Jan 31 05:46:55 crc kubenswrapper[4832]: E0131 05:46:55.724349 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="extract-utilities" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.724355 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="extract-utilities" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.724636 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cbf1cd-b865-4f72-9714-d9a23f8e0276" containerName="registry-server" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.725999 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.746800 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.768853 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.768953 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.769035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df52t\" (UniqueName: \"kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.874151 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.874510 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-df52t\" (UniqueName: \"kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.874685 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.875275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.875525 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:55 crc kubenswrapper[4832]: I0131 05:46:55.900715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df52t\" (UniqueName: \"kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t\") pod \"certified-operators-vkvqb\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:56 crc kubenswrapper[4832]: I0131 05:46:56.164467 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:46:56 crc kubenswrapper[4832]: I0131 05:46:56.456499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:46:56 crc kubenswrapper[4832]: I0131 05:46:56.828303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerDied","Data":"9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2"} Jan 31 05:46:56 crc kubenswrapper[4832]: I0131 05:46:56.828318 4832 generic.go:334] "Generic (PLEG): container finished" podID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerID="9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2" exitCode=0 Jan 31 05:46:56 crc kubenswrapper[4832]: I0131 05:46:56.828401 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerStarted","Data":"ced41db478d6d0182bbcfe143b80aab13ba75e6d27af3c3fb4c4fbf95833c367"} Jan 31 05:46:57 crc kubenswrapper[4832]: I0131 05:46:57.860276 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:46:57 crc kubenswrapper[4832]: E0131 05:46:57.861190 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:46:58 crc kubenswrapper[4832]: I0131 05:46:58.851699 4832 generic.go:334] "Generic (PLEG): container finished" podID="ff258f60-dbe0-40cc-b553-4781b1bb5885" 
containerID="e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685" exitCode=0 Jan 31 05:46:58 crc kubenswrapper[4832]: I0131 05:46:58.851747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerDied","Data":"e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685"} Jan 31 05:46:59 crc kubenswrapper[4832]: I0131 05:46:59.870176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerStarted","Data":"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c"} Jan 31 05:46:59 crc kubenswrapper[4832]: I0131 05:46:59.890215 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkvqb" podStartSLOduration=2.282854518 podStartE2EDuration="4.890184696s" podCreationTimestamp="2026-01-31 05:46:55 +0000 UTC" firstStartedPulling="2026-01-31 05:46:56.831713139 +0000 UTC m=+3825.780534824" lastFinishedPulling="2026-01-31 05:46:59.439043307 +0000 UTC m=+3828.387865002" observedRunningTime="2026-01-31 05:46:59.886363336 +0000 UTC m=+3828.835185021" watchObservedRunningTime="2026-01-31 05:46:59.890184696 +0000 UTC m=+3828.839006421" Jan 31 05:47:06 crc kubenswrapper[4832]: I0131 05:47:06.165631 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:06 crc kubenswrapper[4832]: I0131 05:47:06.166357 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:06 crc kubenswrapper[4832]: I0131 05:47:06.209649 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:06 crc kubenswrapper[4832]: I0131 
05:47:06.979551 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:07 crc kubenswrapper[4832]: I0131 05:47:07.028063 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:47:08 crc kubenswrapper[4832]: I0131 05:47:08.949486 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkvqb" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="registry-server" containerID="cri-o://f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c" gracePeriod=2 Jan 31 05:47:09 crc kubenswrapper[4832]: I0131 05:47:09.860159 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:47:09 crc kubenswrapper[4832]: E0131 05:47:09.860678 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.753198 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.945739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df52t\" (UniqueName: \"kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t\") pod \"ff258f60-dbe0-40cc-b553-4781b1bb5885\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.946217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities\") pod \"ff258f60-dbe0-40cc-b553-4781b1bb5885\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.947406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities" (OuterVolumeSpecName: "utilities") pod "ff258f60-dbe0-40cc-b553-4781b1bb5885" (UID: "ff258f60-dbe0-40cc-b553-4781b1bb5885"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.947635 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content\") pod \"ff258f60-dbe0-40cc-b553-4781b1bb5885\" (UID: \"ff258f60-dbe0-40cc-b553-4781b1bb5885\") " Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.949143 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.961244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t" (OuterVolumeSpecName: "kube-api-access-df52t") pod "ff258f60-dbe0-40cc-b553-4781b1bb5885" (UID: "ff258f60-dbe0-40cc-b553-4781b1bb5885"). InnerVolumeSpecName "kube-api-access-df52t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.971157 4832 generic.go:334] "Generic (PLEG): container finished" podID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerID="f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c" exitCode=0 Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.971307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerDied","Data":"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c"} Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.971407 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkvqb" event={"ID":"ff258f60-dbe0-40cc-b553-4781b1bb5885","Type":"ContainerDied","Data":"ced41db478d6d0182bbcfe143b80aab13ba75e6d27af3c3fb4c4fbf95833c367"} Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.971429 4832 scope.go:117] "RemoveContainer" containerID="f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.971359 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkvqb" Jan 31 05:47:10 crc kubenswrapper[4832]: I0131 05:47:10.999147 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff258f60-dbe0-40cc-b553-4781b1bb5885" (UID: "ff258f60-dbe0-40cc-b553-4781b1bb5885"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.005523 4832 scope.go:117] "RemoveContainer" containerID="e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.027486 4832 scope.go:117] "RemoveContainer" containerID="9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.050596 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df52t\" (UniqueName: \"kubernetes.io/projected/ff258f60-dbe0-40cc-b553-4781b1bb5885-kube-api-access-df52t\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.050625 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff258f60-dbe0-40cc-b553-4781b1bb5885-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.067685 4832 scope.go:117] "RemoveContainer" containerID="f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c" Jan 31 05:47:11 crc kubenswrapper[4832]: E0131 05:47:11.068086 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c\": container with ID starting with f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c not found: ID does not exist" containerID="f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.068126 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c"} err="failed to get container status \"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c\": rpc error: code = NotFound desc = could not find 
container \"f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c\": container with ID starting with f6513f21f48289ab29501b4ca8216a0214560a684f1bcf6720c13bb692d7256c not found: ID does not exist" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.068151 4832 scope.go:117] "RemoveContainer" containerID="e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685" Jan 31 05:47:11 crc kubenswrapper[4832]: E0131 05:47:11.068455 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685\": container with ID starting with e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685 not found: ID does not exist" containerID="e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.068487 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685"} err="failed to get container status \"e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685\": rpc error: code = NotFound desc = could not find container \"e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685\": container with ID starting with e8e0ac9e25dd3fb8d902d1f0cad73747740b5d5abc51b4325ff9ac26e7f94685 not found: ID does not exist" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.068508 4832 scope.go:117] "RemoveContainer" containerID="9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2" Jan 31 05:47:11 crc kubenswrapper[4832]: E0131 05:47:11.068808 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2\": container with ID starting with 9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2 not found: ID does 
not exist" containerID="9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.068835 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2"} err="failed to get container status \"9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2\": rpc error: code = NotFound desc = could not find container \"9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2\": container with ID starting with 9bca242497919b3e44c5e90e466df4dc9d81729ff9cbf4af442c1b442ef5f6b2 not found: ID does not exist" Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.303922 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.312632 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkvqb"] Jan 31 05:47:11 crc kubenswrapper[4832]: I0131 05:47:11.872939 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" path="/var/lib/kubelet/pods/ff258f60-dbe0-40cc-b553-4781b1bb5885/volumes" Jan 31 05:47:24 crc kubenswrapper[4832]: I0131 05:47:24.860095 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:47:24 crc kubenswrapper[4832]: E0131 05:47:24.860942 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:47:26 crc kubenswrapper[4832]: I0131 
05:47:26.840442 4832 scope.go:117] "RemoveContainer" containerID="11ef22b418c5cbfec238a10ce5119615149fed210eac56ba466c24aab9e228e3" Jan 31 05:47:37 crc kubenswrapper[4832]: I0131 05:47:37.859165 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:47:37 crc kubenswrapper[4832]: E0131 05:47:37.859953 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:47:48 crc kubenswrapper[4832]: I0131 05:47:48.860126 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:47:48 crc kubenswrapper[4832]: E0131 05:47:48.860938 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:47:59 crc kubenswrapper[4832]: I0131 05:47:59.860771 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:47:59 crc kubenswrapper[4832]: E0131 05:47:59.861542 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.443933 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b8n6x/must-gather-wg4rj"] Jan 31 05:48:10 crc kubenswrapper[4832]: E0131 05:48:10.444970 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="extract-utilities" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.444989 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="extract-utilities" Jan 31 05:48:10 crc kubenswrapper[4832]: E0131 05:48:10.445018 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="registry-server" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.445025 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="registry-server" Jan 31 05:48:10 crc kubenswrapper[4832]: E0131 05:48:10.445040 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="extract-content" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.445047 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="extract-content" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.445307 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff258f60-dbe0-40cc-b553-4781b1bb5885" containerName="registry-server" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.446622 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.449195 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-b8n6x"/"default-dockercfg-kqzgs" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.450549 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b8n6x"/"openshift-service-ca.crt" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.450861 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b8n6x"/"kube-root-ca.crt" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.458945 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b8n6x/must-gather-wg4rj"] Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.561998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.562251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86cz\" (UniqueName: \"kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.665322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " 
pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.665423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86cz\" (UniqueName: \"kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.665829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.689488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b86cz\" (UniqueName: \"kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz\") pod \"must-gather-wg4rj\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:10 crc kubenswrapper[4832]: I0131 05:48:10.773326 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:48:11 crc kubenswrapper[4832]: I0131 05:48:11.266143 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b8n6x/must-gather-wg4rj"] Jan 31 05:48:11 crc kubenswrapper[4832]: I0131 05:48:11.578305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" event={"ID":"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c","Type":"ContainerStarted","Data":"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183"} Jan 31 05:48:11 crc kubenswrapper[4832]: I0131 05:48:11.578350 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" event={"ID":"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c","Type":"ContainerStarted","Data":"f18cbedda3d90a6992cb0e289924a639f865f0f9f09f93bed05217dcf9a1a3f6"} Jan 31 05:48:12 crc kubenswrapper[4832]: I0131 05:48:12.588479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" event={"ID":"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c","Type":"ContainerStarted","Data":"b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c"} Jan 31 05:48:12 crc kubenswrapper[4832]: I0131 05:48:12.611981 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" podStartSLOduration=2.61196075 podStartE2EDuration="2.61196075s" podCreationTimestamp="2026-01-31 05:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:48:12.605343684 +0000 UTC m=+3901.554165389" watchObservedRunningTime="2026-01-31 05:48:12.61196075 +0000 UTC m=+3901.560782425" Jan 31 05:48:13 crc kubenswrapper[4832]: I0131 05:48:13.860795 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:48:13 crc 
kubenswrapper[4832]: E0131 05:48:13.861642 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.286621 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-lvw5l"] Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.289405 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.467593 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.467686 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdx59\" (UniqueName: \"kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.571220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 
crc kubenswrapper[4832]: I0131 05:48:15.571319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdx59\" (UniqueName: \"kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.571423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.596526 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdx59\" (UniqueName: \"kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59\") pod \"crc-debug-lvw5l\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: I0131 05:48:15.618949 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:15 crc kubenswrapper[4832]: W0131 05:48:15.661897 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod661318e6_4764_43dd_89d4_ae376f1effb6.slice/crio-2dbaa94238cfc7234342f5affb329103be21870bc4ef075e5a0e5cfcd8410b80 WatchSource:0}: Error finding container 2dbaa94238cfc7234342f5affb329103be21870bc4ef075e5a0e5cfcd8410b80: Status 404 returned error can't find the container with id 2dbaa94238cfc7234342f5affb329103be21870bc4ef075e5a0e5cfcd8410b80 Jan 31 05:48:16 crc kubenswrapper[4832]: I0131 05:48:16.626639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" event={"ID":"661318e6-4764-43dd-89d4-ae376f1effb6","Type":"ContainerStarted","Data":"0b0a51117bbc26eff24fdc64781bfb4721608da316a90eb965d82ae232af7719"} Jan 31 05:48:16 crc kubenswrapper[4832]: I0131 05:48:16.627752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" event={"ID":"661318e6-4764-43dd-89d4-ae376f1effb6","Type":"ContainerStarted","Data":"2dbaa94238cfc7234342f5affb329103be21870bc4ef075e5a0e5cfcd8410b80"} Jan 31 05:48:16 crc kubenswrapper[4832]: I0131 05:48:16.659161 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" podStartSLOduration=1.659142455 podStartE2EDuration="1.659142455s" podCreationTimestamp="2026-01-31 05:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:48:16.644274291 +0000 UTC m=+3905.593095986" watchObservedRunningTime="2026-01-31 05:48:16.659142455 +0000 UTC m=+3905.607964140" Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.804507 4832 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.807717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.816934 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.860159 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:48:26 crc kubenswrapper[4832]: E0131 05:48:26.860826 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.899933 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.899996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:26 crc kubenswrapper[4832]: I0131 05:48:26.900022 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lzzn\" (UniqueName: \"kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.001933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.002410 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.002458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.002485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lzzn\" (UniqueName: \"kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.002751 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.026940 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lzzn\" (UniqueName: \"kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn\") pod \"community-operators-2fnjf\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.135092 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:27 crc kubenswrapper[4832]: I0131 05:48:27.910526 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:28 crc kubenswrapper[4832]: I0131 05:48:28.733191 4832 generic.go:334] "Generic (PLEG): container finished" podID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerID="c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448" exitCode=0 Jan 31 05:48:28 crc kubenswrapper[4832]: I0131 05:48:28.733380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerDied","Data":"c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448"} Jan 31 05:48:28 crc kubenswrapper[4832]: I0131 05:48:28.733518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerStarted","Data":"70ddfd9d73a26cdef43eb3c86388eea0a6f0335bf94807100e459cc968d5a212"} Jan 31 
05:48:29 crc kubenswrapper[4832]: I0131 05:48:29.784891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerStarted","Data":"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46"} Jan 31 05:48:30 crc kubenswrapper[4832]: I0131 05:48:30.799093 4832 generic.go:334] "Generic (PLEG): container finished" podID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerID="27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46" exitCode=0 Jan 31 05:48:30 crc kubenswrapper[4832]: I0131 05:48:30.799181 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerDied","Data":"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46"} Jan 31 05:48:31 crc kubenswrapper[4832]: I0131 05:48:31.809216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerStarted","Data":"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5"} Jan 31 05:48:31 crc kubenswrapper[4832]: I0131 05:48:31.895898 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2fnjf" podStartSLOduration=3.4576013 podStartE2EDuration="5.895875609s" podCreationTimestamp="2026-01-31 05:48:26 +0000 UTC" firstStartedPulling="2026-01-31 05:48:28.734806767 +0000 UTC m=+3917.683628452" lastFinishedPulling="2026-01-31 05:48:31.173081076 +0000 UTC m=+3920.121902761" observedRunningTime="2026-01-31 05:48:31.892973049 +0000 UTC m=+3920.841794734" watchObservedRunningTime="2026-01-31 05:48:31.895875609 +0000 UTC m=+3920.844697294" Jan 31 05:48:37 crc kubenswrapper[4832]: I0131 05:48:37.135890 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:37 crc kubenswrapper[4832]: I0131 05:48:37.136449 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:37 crc kubenswrapper[4832]: I0131 05:48:37.209843 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:37 crc kubenswrapper[4832]: I0131 05:48:37.960279 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:38 crc kubenswrapper[4832]: I0131 05:48:38.036546 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:39 crc kubenswrapper[4832]: I0131 05:48:39.860777 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:48:39 crc kubenswrapper[4832]: E0131 05:48:39.861551 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:48:39 crc kubenswrapper[4832]: I0131 05:48:39.909902 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2fnjf" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="registry-server" containerID="cri-o://82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5" gracePeriod=2 Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.438870 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.596833 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lzzn\" (UniqueName: \"kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn\") pod \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.597081 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content\") pod \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.597316 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities\") pod \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\" (UID: \"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e\") " Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.598888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities" (OuterVolumeSpecName: "utilities") pod "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" (UID: "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.604093 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn" (OuterVolumeSpecName: "kube-api-access-6lzzn") pod "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" (UID: "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e"). InnerVolumeSpecName "kube-api-access-6lzzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.699400 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.699446 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lzzn\" (UniqueName: \"kubernetes.io/projected/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-kube-api-access-6lzzn\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.920362 4832 generic.go:334] "Generic (PLEG): container finished" podID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerID="82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5" exitCode=0 Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.921396 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerDied","Data":"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5"} Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.921500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2fnjf" event={"ID":"87d5a0d0-bb0e-4e2c-bfcc-83921a52745e","Type":"ContainerDied","Data":"70ddfd9d73a26cdef43eb3c86388eea0a6f0335bf94807100e459cc968d5a212"} Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.921637 4832 scope.go:117] "RemoveContainer" containerID="82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.921822 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2fnjf" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.942704 4832 scope.go:117] "RemoveContainer" containerID="27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46" Jan 31 05:48:40 crc kubenswrapper[4832]: I0131 05:48:40.965170 4832 scope.go:117] "RemoveContainer" containerID="c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.016418 4832 scope.go:117] "RemoveContainer" containerID="82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5" Jan 31 05:48:41 crc kubenswrapper[4832]: E0131 05:48:41.017136 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5\": container with ID starting with 82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5 not found: ID does not exist" containerID="82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.017238 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5"} err="failed to get container status \"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5\": rpc error: code = NotFound desc = could not find container \"82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5\": container with ID starting with 82ad698cd5a91562cf4657974a75931f939bb487322456e5db4d043f033d02d5 not found: ID does not exist" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.017321 4832 scope.go:117] "RemoveContainer" containerID="27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46" Jan 31 05:48:41 crc kubenswrapper[4832]: E0131 05:48:41.017785 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46\": container with ID starting with 27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46 not found: ID does not exist" containerID="27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.017866 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46"} err="failed to get container status \"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46\": rpc error: code = NotFound desc = could not find container \"27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46\": container with ID starting with 27f34bb8f9451ad0aafdae12c2da16ef57d43f65591634105acfc29a20333d46 not found: ID does not exist" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.017948 4832 scope.go:117] "RemoveContainer" containerID="c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448" Jan 31 05:48:41 crc kubenswrapper[4832]: E0131 05:48:41.018450 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448\": container with ID starting with c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448 not found: ID does not exist" containerID="c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.018504 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448"} err="failed to get container status \"c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448\": rpc error: code = NotFound desc = could not find container 
\"c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448\": container with ID starting with c215ed5b51bfa49aeaf41e2652762dafdc23c8bc4629023bce51455cb1d78448 not found: ID does not exist" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.453888 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" (UID: "87d5a0d0-bb0e-4e2c-bfcc-83921a52745e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.514482 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.553155 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.565046 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2fnjf"] Jan 31 05:48:41 crc kubenswrapper[4832]: I0131 05:48:41.871402 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" path="/var/lib/kubelet/pods/87d5a0d0-bb0e-4e2c-bfcc-83921a52745e/volumes" Jan 31 05:48:52 crc kubenswrapper[4832]: I0131 05:48:52.860087 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522" Jan 31 05:48:54 crc kubenswrapper[4832]: I0131 05:48:54.034686 4832 generic.go:334] "Generic (PLEG): container finished" podID="661318e6-4764-43dd-89d4-ae376f1effb6" containerID="0b0a51117bbc26eff24fdc64781bfb4721608da316a90eb965d82ae232af7719" exitCode=0 Jan 31 05:48:54 crc kubenswrapper[4832]: I0131 
05:48:54.035541 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" event={"ID":"661318e6-4764-43dd-89d4-ae376f1effb6","Type":"ContainerDied","Data":"0b0a51117bbc26eff24fdc64781bfb4721608da316a90eb965d82ae232af7719"} Jan 31 05:48:54 crc kubenswrapper[4832]: I0131 05:48:54.039752 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d"} Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.184274 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.224175 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-lvw5l"] Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.236082 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-lvw5l"] Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.287800 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdx59\" (UniqueName: \"kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59\") pod \"661318e6-4764-43dd-89d4-ae376f1effb6\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.287862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host\") pod \"661318e6-4764-43dd-89d4-ae376f1effb6\" (UID: \"661318e6-4764-43dd-89d4-ae376f1effb6\") " Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.288336 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host" (OuterVolumeSpecName: "host") pod "661318e6-4764-43dd-89d4-ae376f1effb6" (UID: "661318e6-4764-43dd-89d4-ae376f1effb6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.308753 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59" (OuterVolumeSpecName: "kube-api-access-zdx59") pod "661318e6-4764-43dd-89d4-ae376f1effb6" (UID: "661318e6-4764-43dd-89d4-ae376f1effb6"). InnerVolumeSpecName "kube-api-access-zdx59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.390206 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdx59\" (UniqueName: \"kubernetes.io/projected/661318e6-4764-43dd-89d4-ae376f1effb6-kube-api-access-zdx59\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.390424 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/661318e6-4764-43dd-89d4-ae376f1effb6-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:55 crc kubenswrapper[4832]: I0131 05:48:55.869782 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661318e6-4764-43dd-89d4-ae376f1effb6" path="/var/lib/kubelet/pods/661318e6-4764-43dd-89d4-ae376f1effb6/volumes" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.060125 4832 scope.go:117] "RemoveContainer" containerID="0b0a51117bbc26eff24fdc64781bfb4721608da316a90eb965d82ae232af7719" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.060188 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-lvw5l" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.450102 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-98wtp"] Jan 31 05:48:56 crc kubenswrapper[4832]: E0131 05:48:56.450766 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="extract-content" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.450782 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="extract-content" Jan 31 05:48:56 crc kubenswrapper[4832]: E0131 05:48:56.450794 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="registry-server" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.450800 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="registry-server" Jan 31 05:48:56 crc kubenswrapper[4832]: E0131 05:48:56.450813 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661318e6-4764-43dd-89d4-ae376f1effb6" containerName="container-00" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.450819 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="661318e6-4764-43dd-89d4-ae376f1effb6" containerName="container-00" Jan 31 05:48:56 crc kubenswrapper[4832]: E0131 05:48:56.450832 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="extract-utilities" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.450838 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="extract-utilities" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.451339 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="661318e6-4764-43dd-89d4-ae376f1effb6" 
containerName="container-00" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.452151 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d5a0d0-bb0e-4e2c-bfcc-83921a52745e" containerName="registry-server" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.452999 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.518931 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.519049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.620726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.620885 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc 
kubenswrapper[4832]: I0131 05:48:56.621018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.647537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck\") pod \"crc-debug-98wtp\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:56 crc kubenswrapper[4832]: I0131 05:48:56.775329 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:57 crc kubenswrapper[4832]: I0131 05:48:57.072807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" event={"ID":"200d9929-61ac-448c-9a4b-687f06b7b81b","Type":"ContainerStarted","Data":"efbeeb64a905406b49fa584ddb5c8ab7b1b100decf9f367b3234d8c20ceaab15"} Jan 31 05:48:58 crc kubenswrapper[4832]: I0131 05:48:58.083804 4832 generic.go:334] "Generic (PLEG): container finished" podID="200d9929-61ac-448c-9a4b-687f06b7b81b" containerID="faa7840a5ebdc1a205d96e6c87eab514880b061e3731dc08150df3c260c1d23e" exitCode=0 Jan 31 05:48:58 crc kubenswrapper[4832]: I0131 05:48:58.083928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" event={"ID":"200d9929-61ac-448c-9a4b-687f06b7b81b","Type":"ContainerDied","Data":"faa7840a5ebdc1a205d96e6c87eab514880b061e3731dc08150df3c260c1d23e"} Jan 31 05:48:58 crc kubenswrapper[4832]: I0131 05:48:58.486141 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-98wtp"] Jan 31 05:48:58 crc 
kubenswrapper[4832]: I0131 05:48:58.497195 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-98wtp"] Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.211650 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.269597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host\") pod \"200d9929-61ac-448c-9a4b-687f06b7b81b\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.269699 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck\") pod \"200d9929-61ac-448c-9a4b-687f06b7b81b\" (UID: \"200d9929-61ac-448c-9a4b-687f06b7b81b\") " Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.269728 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host" (OuterVolumeSpecName: "host") pod "200d9929-61ac-448c-9a4b-687f06b7b81b" (UID: "200d9929-61ac-448c-9a4b-687f06b7b81b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.270643 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/200d9929-61ac-448c-9a4b-687f06b7b81b-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.275747 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck" (OuterVolumeSpecName: "kube-api-access-vhqck") pod "200d9929-61ac-448c-9a4b-687f06b7b81b" (UID: "200d9929-61ac-448c-9a4b-687f06b7b81b"). InnerVolumeSpecName "kube-api-access-vhqck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.372376 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhqck\" (UniqueName: \"kubernetes.io/projected/200d9929-61ac-448c-9a4b-687f06b7b81b-kube-api-access-vhqck\") on node \"crc\" DevicePath \"\"" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.868484 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200d9929-61ac-448c-9a4b-687f06b7b81b" path="/var/lib/kubelet/pods/200d9929-61ac-448c-9a4b-687f06b7b81b/volumes" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.957723 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-l7dx5"] Jan 31 05:48:59 crc kubenswrapper[4832]: E0131 05:48:59.958945 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200d9929-61ac-448c-9a4b-687f06b7b81b" containerName="container-00" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.958971 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="200d9929-61ac-448c-9a4b-687f06b7b81b" containerName="container-00" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.959415 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="200d9929-61ac-448c-9a4b-687f06b7b81b" containerName="container-00" Jan 31 05:48:59 crc kubenswrapper[4832]: I0131 05:48:59.960445 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.093459 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.093594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9xb9\" (UniqueName: \"kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.101096 4832 scope.go:117] "RemoveContainer" containerID="faa7840a5ebdc1a205d96e6c87eab514880b061e3731dc08150df3c260c1d23e" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.101144 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-98wtp" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.195595 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9xb9\" (UniqueName: \"kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.195752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.195937 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.216641 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9xb9\" (UniqueName: \"kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9\") pod \"crc-debug-l7dx5\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:00 crc kubenswrapper[4832]: I0131 05:49:00.282655 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:01 crc kubenswrapper[4832]: I0131 05:49:01.116191 4832 generic.go:334] "Generic (PLEG): container finished" podID="55b243f5-a3ad-437e-9691-8ecd92bbcc6f" containerID="eae0c75e7432f003ffadd282e48a751098ea244c71261829d2e6737618d031c1" exitCode=0 Jan 31 05:49:01 crc kubenswrapper[4832]: I0131 05:49:01.116359 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" event={"ID":"55b243f5-a3ad-437e-9691-8ecd92bbcc6f","Type":"ContainerDied","Data":"eae0c75e7432f003ffadd282e48a751098ea244c71261829d2e6737618d031c1"} Jan 31 05:49:01 crc kubenswrapper[4832]: I0131 05:49:01.116630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" event={"ID":"55b243f5-a3ad-437e-9691-8ecd92bbcc6f","Type":"ContainerStarted","Data":"219aaa02871aa50e39e63367f6d4e886d3bdf24bb100bf210329c220cf817f11"} Jan 31 05:49:01 crc kubenswrapper[4832]: I0131 05:49:01.162032 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-l7dx5"] Jan 31 05:49:01 crc kubenswrapper[4832]: I0131 05:49:01.171049 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b8n6x/crc-debug-l7dx5"] Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.234308 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.346828 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9xb9\" (UniqueName: \"kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9\") pod \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.347020 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host\") pod \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\" (UID: \"55b243f5-a3ad-437e-9691-8ecd92bbcc6f\") " Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.347166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host" (OuterVolumeSpecName: "host") pod "55b243f5-a3ad-437e-9691-8ecd92bbcc6f" (UID: "55b243f5-a3ad-437e-9691-8ecd92bbcc6f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.347530 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-host\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.357794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9" (OuterVolumeSpecName: "kube-api-access-m9xb9") pod "55b243f5-a3ad-437e-9691-8ecd92bbcc6f" (UID: "55b243f5-a3ad-437e-9691-8ecd92bbcc6f"). InnerVolumeSpecName "kube-api-access-m9xb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:49:02 crc kubenswrapper[4832]: I0131 05:49:02.448806 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9xb9\" (UniqueName: \"kubernetes.io/projected/55b243f5-a3ad-437e-9691-8ecd92bbcc6f-kube-api-access-m9xb9\") on node \"crc\" DevicePath \"\"" Jan 31 05:49:03 crc kubenswrapper[4832]: I0131 05:49:03.132360 4832 scope.go:117] "RemoveContainer" containerID="eae0c75e7432f003ffadd282e48a751098ea244c71261829d2e6737618d031c1" Jan 31 05:49:03 crc kubenswrapper[4832]: I0131 05:49:03.132380 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/crc-debug-l7dx5" Jan 31 05:49:03 crc kubenswrapper[4832]: I0131 05:49:03.875312 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b243f5-a3ad-437e-9691-8ecd92bbcc6f" path="/var/lib/kubelet/pods/55b243f5-a3ad-437e-9691-8ecd92bbcc6f/volumes" Jan 31 05:49:34 crc kubenswrapper[4832]: I0131 05:49:34.507705 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-797bd69d58-5ff8g_becb2819-84d8-4a62-b98f-75e779ad0f56/barbican-api/0.log" Jan 31 05:49:34 crc kubenswrapper[4832]: I0131 05:49:34.664237 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-797bd69d58-5ff8g_becb2819-84d8-4a62-b98f-75e779ad0f56/barbican-api-log/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.115357 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c4c47d5bb-m6mqf_6df8e9f4-654c-449b-b5ce-2fb826d6449c/barbican-keystone-listener-log/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.192945 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c4c47d5bb-m6mqf_6df8e9f4-654c-449b-b5ce-2fb826d6449c/barbican-keystone-listener/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.305756 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8f9d96795-d8rrf_0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa/barbican-worker/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.373878 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8f9d96795-d8rrf_0d4cc16b-16ab-4f2e-9d54-e9dc3b40d9fa/barbican-worker-log/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.447472 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zqmfw_27dc3183-5db8-4c94-8247-f5af07376737/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.581728 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/ceilometer-central-agent/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.633677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/ceilometer-notification-agent/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.682804 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/proxy-httpd/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.735291 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9305639f-a8a1-4742-b3d9-fe416bcef2cd/sg-core/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.852028 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7bf54b70-e647-47e2-a8fd-1f15cab614a6/cinder-api/0.log" Jan 31 05:49:35 crc kubenswrapper[4832]: I0131 05:49:35.914814 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7bf54b70-e647-47e2-a8fd-1f15cab614a6/cinder-api-log/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.065890 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d4b153e2-4087-4707-a751-3b518f670193/probe/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.084839 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d4b153e2-4087-4707-a751-3b518f670193/cinder-scheduler/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.272718 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vn4z4_3b016223-cc19-45ea-9ccb-fc81103e1e5f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.274840 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9vvrx_915d4541-b4f7-4a50-ba36-3ed09a631c87/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.437152 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/init/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.654491 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/init/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.719172 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hfxnx_ecaf2da0-d078-4810-9574-05b12bd09288/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.730419 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-c5l4d_b32a39cb-1499-49b0-8407-b2bfd9c3abbb/dnsmasq-dns/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.909739 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_ace9e44a-55e5-48ae-9e2e-533ab30a5cd8/glance-log/0.log" Jan 31 05:49:36 crc kubenswrapper[4832]: I0131 05:49:36.937265 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ace9e44a-55e5-48ae-9e2e-533ab30a5cd8/glance-httpd/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.125904 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0720e9f6-21f1-43e9-b075-a35d548f4af9/glance-httpd/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.163305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0720e9f6-21f1-43e9-b075-a35d548f4af9/glance-log/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.266192 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f6b9f547b-mrjcq_769ea643-f342-413c-a719-7c65e086b9eb/horizon/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.472372 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mzk88_8ab8bc58-9ae3-4284-b959-164da6ebee5e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.632867 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dgr66_23b0f31e-31d6-4f12-91d5-fe078d89dfb7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.711010 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f6b9f547b-mrjcq_769ea643-f342-413c-a719-7c65e086b9eb/horizon-log/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.864209 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-79bb65dc58-kdbq7_a8150cab-aaf2-42f5-8148-ffb124e56569/keystone-api/0.log" Jan 31 05:49:37 crc kubenswrapper[4832]: I0131 05:49:37.896336 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bca479b9-47c2-4c05-9b4c-dbde78e18be7/kube-state-metrics/0.log" Jan 31 05:49:38 crc kubenswrapper[4832]: I0131 05:49:38.007864 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4qppj_89932c58-5727-49df-bd91-903acb18f444/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:38 crc kubenswrapper[4832]: I0131 05:49:38.381434 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7c54d8bf-w9s7x_9192a7c5-49bb-4fed-858e-0c14b96f1288/neutron-httpd/0.log" Jan 31 05:49:38 crc kubenswrapper[4832]: I0131 05:49:38.520774 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vfhfs_ce3f980d-61a1-4d42-8b56-f7a064c667da/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:38 crc kubenswrapper[4832]: I0131 05:49:38.545688 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7c54d8bf-w9s7x_9192a7c5-49bb-4fed-858e-0c14b96f1288/neutron-api/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.230419 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bbfa7e6d-7200-4b32-9749-d04865e74d5e/nova-cell0-conductor-conductor/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.331518 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f25cab2-43da-43e5-9cb7-78112bf8ea08/nova-api-log/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.523324 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_d6ec6693-e464-4258-a3fa-fef2b9c97bae/nova-cell1-conductor-conductor/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.687303 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e209d1b6-1bc1-4667-ad99-4b2cf348f2b7/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.714890 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3f25cab2-43da-43e5-9cb7-78112bf8ea08/nova-api-api/0.log" Jan 31 05:49:39 crc kubenswrapper[4832]: I0131 05:49:39.791592 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p9rvr_3b3b6eae-8f54-4057-b9c8-74f27b762ada/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.024012 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_83d7f27d-9408-4f6b-ab25-a0f453cc377e/nova-metadata-log/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.315647 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/mysql-bootstrap/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.415763 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e97fe0a4-c76d-439e-a096-460328d1d9d4/nova-scheduler-scheduler/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.503317 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/mysql-bootstrap/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.535449 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c6177d8c-3ae2-4aee-87ac-eefdc96806e6/galera/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.706244 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/mysql-bootstrap/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.889891 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/galera/0.log" Jan 31 05:49:40 crc kubenswrapper[4832]: I0131 05:49:40.900923 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b9bfe69c-78b0-4982-b9ab-7aa41bd071ec/mysql-bootstrap/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.083851 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1847eb5f-c952-4d08-8579-786994ad5c56/openstackclient/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.205756 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8sq59_103522f1-37d5-48e1-8004-ab58b154d040/ovn-controller/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.335846 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-srkd5_d7e9680d-d2db-4c26-99be-f2e6331d64bf/openstack-network-exporter/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.364111 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_83d7f27d-9408-4f6b-ab25-a0f453cc377e/nova-metadata-metadata/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.489826 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server-init/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.675667 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.711211 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovs-vswitchd/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.755151 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nmcpt_61b751c2-a6a2-4d2f-aa98-5ac3dff4fb4f/ovsdb-server-init/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.897492 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pvjvj_70dab5d9-fca1-425f-91e9-42b0013c2e64/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.961646 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8/openstack-network-exporter/0.log" Jan 31 05:49:41 crc kubenswrapper[4832]: I0131 05:49:41.962574 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3e0048ad-269d-4c8f-8920-0f0b0b3e9fc8/ovn-northd/0.log" Jan 31 05:49:42 crc kubenswrapper[4832]: I0131 05:49:42.147179 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05c379c-cf2f-4179-a902-475d2a555294/openstack-network-exporter/0.log" Jan 31 05:49:42 crc kubenswrapper[4832]: I0131 05:49:42.173655 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05c379c-cf2f-4179-a902-475d2a555294/ovsdbserver-nb/0.log" Jan 31 05:49:42 crc kubenswrapper[4832]: I0131 05:49:42.366820 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2edd879-2e11-41b2-872a-1f50cf71719f/openstack-network-exporter/0.log" Jan 31 05:49:42 crc kubenswrapper[4832]: I0131 05:49:42.376469 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e2edd879-2e11-41b2-872a-1f50cf71719f/ovsdbserver-sb/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.065459 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548576cf8d-gz7f7_0bfb9b89-7b02-4f5a-b967-d84ad8e20325/placement-log/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.075645 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/setup-container/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.116074 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-548576cf8d-gz7f7_0bfb9b89-7b02-4f5a-b967-d84ad8e20325/placement-api/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.281344 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/setup-container/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.309719 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f8cc578f-3827-4100-aa82-e6cf59602353/rabbitmq/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.367703 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/setup-container/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.602803 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/setup-container/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.657876 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nbcrq_cafe239e-692a-4f8c-baf3-94b454ed706d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.708065 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f59551b3-d149-4bf1-90e2-428e0615f1ce/rabbitmq/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 
05:49:43.840090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l9vkx_46cb5cd9-ca77-4c57-9d83-b4ef015da993/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:43 crc kubenswrapper[4832]: I0131 05:49:43.945190 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fqkh7_96b10887-6c77-4792-ae1d-87209c13b9fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.075530 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zzp8g_eac023bd-8a06-4be3-9c44-a29c87e4c44c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.162990 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gmglt_26297c57-667f-414b-912c-2bfa05b73299/ssh-known-hosts-edpm-deployment/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.668767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6df44bf7d7-6dwfp_a8780918-f34b-41e5-9ccc-d12823931da5/proxy-server/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.795188 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6df44bf7d7-6dwfp_a8780918-f34b-41e5-9ccc-d12823931da5/proxy-httpd/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.859090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bwd8q_2d790d64-4815-452e-9f17-13b1b9b75c35/swift-ring-rebalance/0.log" Jan 31 05:49:44 crc kubenswrapper[4832]: I0131 05:49:44.955962 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-auditor/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.044308 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-reaper/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.120441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-replicator/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.193606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/account-server/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.224222 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-auditor/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.276179 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-replicator/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.319673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-server/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.374147 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/container-updater/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.459940 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-auditor/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.475666 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-expirer/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.555213 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-server/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.586723 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-replicator/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.652274 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/object-updater/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.679135 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/rsync/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.775982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e5c7a77d-3a2a-4e4b-88f8-87a99bf6d087/swift-recon-cron/0.log" Jan 31 05:49:45 crc kubenswrapper[4832]: I0131 05:49:45.930696 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vcwx6_02aa5c8f-25f9-43a0-9d6e-dd67d7348443/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:46 crc kubenswrapper[4832]: I0131 05:49:46.017716 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_cf637281-101a-4e11-93b6-74f55d914798/tempest-tests-tempest-tests-runner/0.log" Jan 31 05:49:46 crc kubenswrapper[4832]: I0131 05:49:46.111834 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9b96ff1a-9380-4bdc-a490-bdfa6e760792/test-operator-logs-container/0.log" Jan 31 05:49:46 crc kubenswrapper[4832]: I0131 05:49:46.227568 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4qqzc_945ad601-23f1-4494-a2a8-6bf53b841d2f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 05:49:56 crc kubenswrapper[4832]: I0131 05:49:56.659475 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_86b5161e-fa9c-4b0d-9549-2ab191b90e33/memcached/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.086213 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-zr7l4_0e40cd6e-2cdb-4a24-82d8-d27fc4feb14d/manager/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.092664 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-5hl82_c20a8bcb-8431-4318-9e7d-8f4ccaddfa8b/manager/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.316246 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-gc6zx_805b9f0e-cb57-4b71-b199-b8ee289af169/manager/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.461986 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.659006 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.669119 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.693850 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.829223 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/util/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.875444 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/extract/0.log" Jan 31 05:50:13 crc kubenswrapper[4832]: I0131 05:50:13.898920 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fe147c7c50c3af41c79e1995dbdfb37fc0b559058ba0f7094cae3230e4nxfvm_9ddd6955-162a-4923-b841-4eb20989be7f/pull/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.087523 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-pnktz_9d690743-c300-46c9-83d7-c416ba5aff83/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.087620 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-2hr2t_141c81b8-f2f6-4f96-9ac7-83305f4eabd0/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.260107 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-4m8x2_b23fff55-653f-417a-9f77-d7b115586ade/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.474215 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-4445f_830a5967-5b56-4c70-8940-ef90cd945807/manager/0.log" Jan 31 05:50:14 crc 
kubenswrapper[4832]: I0131 05:50:14.521130 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57997b5fcd-hjsbn_48118fb9-dcf4-45f5-8096-c558f980eab4/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.663429 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-jl4p7_0ac9e56c-c068-4fec-98d8-8d44e1fa6ccd/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.767542 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-jtr9c_ea8d7014-c1f0-4b4f-aa01-7865124c3187/manager/0.log" Jan 31 05:50:14 crc kubenswrapper[4832]: I0131 05:50:14.864815 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-fdxks_7c8b7b2a-0a2f-4a69-b538-6afa0fb7e138/manager/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.027298 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-c9pxb_e7569d69-3ad6-4127-a49f-a16706a35099/manager/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.120403 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-jcrbt_d86d3d02-d07f-4bf0-a01a-18652faa5111/manager/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.220058 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-8xwhl_049ad615-904c-4043-b395-dd242e743140/manager/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.315833 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dkndkx_f59551da-68de-4704-98fd-d9355e69c5af/manager/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.595836 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6cbc497cdb-zc65j_f76fb23f-871d-459c-b196-8e33703f7e44/operator/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.699612 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sxhvd_09997099-ea53-4947-b5ec-eaed51db7a12/registry-server/0.log" Jan 31 05:50:15 crc kubenswrapper[4832]: I0131 05:50:15.875215 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-t6wm8_6d39a5f9-a0b0-4a9e-871b-30a307adfd3d/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.061417 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9ptdq_acc59b3c-877b-4a0e-a118-76a05d362ad5/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.256810 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8sg7w_f4c4da15-1fd0-4a3f-962e-6a9c4ce10cf3/operator/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.436724 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-m49v4_aec1c2a0-52b1-4a2e-8986-1e12be79d67c/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.669453 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-r2nnw_551db33b-8ad9-4a8c-9275-2c19c1104232/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.761752 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68ffd75798-45z7h_fedc767a-c749-4373-84ab-c32673c34e40/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.814214 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-2vw9x_9ba96940-214f-41b4-a1a2-ecdeced92715/manager/0.log" Jan 31 05:50:16 crc kubenswrapper[4832]: I0131 05:50:16.894486 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-7t5cl_e6c1771e-5f66-444c-8718-e6022bbbb473/manager/0.log" Jan 31 05:50:35 crc kubenswrapper[4832]: I0131 05:50:35.581078 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9kpwz_5797b6b7-298b-4e04-8945-0a733f37feaa/control-plane-machine-set-operator/0.log" Jan 31 05:50:35 crc kubenswrapper[4832]: I0131 05:50:35.777800 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn2qs_9ed37686-689b-46e5-8069-0e4de3519afb/kube-rbac-proxy/0.log" Jan 31 05:50:35 crc kubenswrapper[4832]: I0131 05:50:35.813957 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn2qs_9ed37686-689b-46e5-8069-0e4de3519afb/machine-api-operator/0.log" Jan 31 05:50:48 crc kubenswrapper[4832]: I0131 05:50:48.242830 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xqw4t_c0e95955-9451-49d7-89f9-daff9bd04f21/cert-manager-controller/0.log" Jan 31 05:50:48 crc kubenswrapper[4832]: I0131 05:50:48.398779 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-465dm_82dfe439-0519-43f4-867f-b68944898393/cert-manager-cainjector/0.log" Jan 31 05:50:48 crc kubenswrapper[4832]: I0131 05:50:48.472852 4832 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-26nzk_0ebb0bad-994a-4c2a-b9d2-21f38ee3939a/cert-manager-webhook/0.log" Jan 31 05:51:01 crc kubenswrapper[4832]: I0131 05:51:01.594354 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nl2wt_6abddfff-a35d-4b7a-aeba-354c6b045b6f/nmstate-console-plugin/0.log" Jan 31 05:51:01 crc kubenswrapper[4832]: I0131 05:51:01.759455 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-z9vf4_f14f6771-126c-41a5-9810-7e4ed01aae96/nmstate-handler/0.log" Jan 31 05:51:01 crc kubenswrapper[4832]: I0131 05:51:01.781995 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m2d42_668e7e0f-218c-48f9-a40b-13f83d5bf7b9/kube-rbac-proxy/0.log" Jan 31 05:51:01 crc kubenswrapper[4832]: I0131 05:51:01.810994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-m2d42_668e7e0f-218c-48f9-a40b-13f83d5bf7b9/nmstate-metrics/0.log" Jan 31 05:51:01 crc kubenswrapper[4832]: I0131 05:51:01.941818 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wgqxf_a73e7d2a-36f0-49e9-82ab-11ede6b1761b/nmstate-operator/0.log" Jan 31 05:51:02 crc kubenswrapper[4832]: I0131 05:51:02.002956 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-nwkdn_c78413bf-08b7-4f63-b849-6206713fe6af/nmstate-webhook/0.log" Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.255711 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"] Jan 31 05:51:05 crc kubenswrapper[4832]: E0131 05:51:05.256544 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b243f5-a3ad-437e-9691-8ecd92bbcc6f" containerName="container-00" Jan 31 05:51:05 crc 
kubenswrapper[4832]: I0131 05:51:05.320753 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b243f5-a3ad-437e-9691-8ecd92bbcc6f" containerName="container-00"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.321211 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b243f5-a3ad-437e-9691-8ecd92bbcc6f" containerName="container-00"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.322445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"]
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.322536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.351372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.351457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4q4\" (UniqueName: \"kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.351502 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.453068 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.453163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4q4\" (UniqueName: \"kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.453215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.453662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.453702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.477007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4q4\" (UniqueName: \"kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4\") pod \"redhat-operators-sgb8s\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") " pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:05 crc kubenswrapper[4832]: I0131 05:51:05.644016 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:06 crc kubenswrapper[4832]: I0131 05:51:06.124691 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"]
Jan 31 05:51:06 crc kubenswrapper[4832]: I0131 05:51:06.247694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerStarted","Data":"b1d79db98df1a5a319ae4de1d0d9de1fa7986fe3401d0e62815561054685b5da"}
Jan 31 05:51:06 crc kubenswrapper[4832]: E0131 05:51:06.443882 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb35761_f014_43c2_8e9a_9d63fb9784f2.slice/crio-conmon-310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 05:51:07 crc kubenswrapper[4832]: I0131 05:51:07.256978 4832 generic.go:334] "Generic (PLEG): container finished" podID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" containerID="310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807" exitCode=0
Jan 31 05:51:07 crc kubenswrapper[4832]: I0131 05:51:07.257274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerDied","Data":"310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807"}
Jan 31 05:51:07 crc kubenswrapper[4832]: I0131 05:51:07.259517 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 05:51:08 crc kubenswrapper[4832]: I0131 05:51:08.266521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerStarted","Data":"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"}
Jan 31 05:51:09 crc kubenswrapper[4832]: I0131 05:51:09.276616 4832 generic.go:334] "Generic (PLEG): container finished" podID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" containerID="3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4" exitCode=0
Jan 31 05:51:09 crc kubenswrapper[4832]: I0131 05:51:09.276685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerDied","Data":"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"}
Jan 31 05:51:11 crc kubenswrapper[4832]: I0131 05:51:11.295036 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerStarted","Data":"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"}
Jan 31 05:51:11 crc kubenswrapper[4832]: I0131 05:51:11.320393 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sgb8s" podStartSLOduration=3.448495542 podStartE2EDuration="6.320373156s" podCreationTimestamp="2026-01-31 05:51:05 +0000 UTC" firstStartedPulling="2026-01-31 05:51:07.259264763 +0000 UTC m=+4076.208086448" lastFinishedPulling="2026-01-31 05:51:10.131142377 +0000 UTC m=+4079.079964062" observedRunningTime="2026-01-31 05:51:11.320321584 +0000 UTC m=+4080.269143279" watchObservedRunningTime="2026-01-31 05:51:11.320373156 +0000 UTC m=+4080.269194841"
Jan 31 05:51:15 crc kubenswrapper[4832]: I0131 05:51:15.645199 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:15 crc kubenswrapper[4832]: I0131 05:51:15.645897 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:16 crc kubenswrapper[4832]: I0131 05:51:16.710145 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sgb8s" podUID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" containerName="registry-server" probeResult="failure" output=<
Jan 31 05:51:16 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s
Jan 31 05:51:16 crc kubenswrapper[4832]: >
Jan 31 05:51:18 crc kubenswrapper[4832]: I0131 05:51:18.539987 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:51:18 crc kubenswrapper[4832]: I0131 05:51:18.540325 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:51:25 crc kubenswrapper[4832]: I0131 05:51:25.684274 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:25 crc kubenswrapper[4832]: I0131 05:51:25.730633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:25 crc kubenswrapper[4832]: I0131 05:51:25.925144 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"]
Jan 31 05:51:27 crc kubenswrapper[4832]: I0131 05:51:27.444238 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sgb8s" podUID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" containerName="registry-server" containerID="cri-o://85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0" gracePeriod=2
Jan 31 05:51:27 crc kubenswrapper[4832]: I0131 05:51:27.865677 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.009461 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq4q4\" (UniqueName: \"kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4\") pod \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") "
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.010772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content\") pod \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") "
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.011004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities\") pod \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\" (UID: \"0bb35761-f014-43c2-8e9a-9d63fb9784f2\") "
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.011705 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities" (OuterVolumeSpecName: "utilities") pod "0bb35761-f014-43c2-8e9a-9d63fb9784f2" (UID: "0bb35761-f014-43c2-8e9a-9d63fb9784f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.012134 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.020801 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4" (OuterVolumeSpecName: "kube-api-access-hq4q4") pod "0bb35761-f014-43c2-8e9a-9d63fb9784f2" (UID: "0bb35761-f014-43c2-8e9a-9d63fb9784f2"). InnerVolumeSpecName "kube-api-access-hq4q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.113627 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq4q4\" (UniqueName: \"kubernetes.io/projected/0bb35761-f014-43c2-8e9a-9d63fb9784f2-kube-api-access-hq4q4\") on node \"crc\" DevicePath \"\""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.117642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bb35761-f014-43c2-8e9a-9d63fb9784f2" (UID: "0bb35761-f014-43c2-8e9a-9d63fb9784f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.214918 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb35761-f014-43c2-8e9a-9d63fb9784f2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.454405 4832 generic.go:334] "Generic (PLEG): container finished" podID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" containerID="85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0" exitCode=0
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.454480 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sgb8s"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.454479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerDied","Data":"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"}
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.455742 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sgb8s" event={"ID":"0bb35761-f014-43c2-8e9a-9d63fb9784f2","Type":"ContainerDied","Data":"b1d79db98df1a5a319ae4de1d0d9de1fa7986fe3401d0e62815561054685b5da"}
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.455767 4832 scope.go:117] "RemoveContainer" containerID="85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.486289 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"]
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.487655 4832 scope.go:117] "RemoveContainer" containerID="3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.496483 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sgb8s"]
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.512235 4832 scope.go:117] "RemoveContainer" containerID="310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.558313 4832 scope.go:117] "RemoveContainer" containerID="85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"
Jan 31 05:51:28 crc kubenswrapper[4832]: E0131 05:51:28.558772 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0\": container with ID starting with 85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0 not found: ID does not exist" containerID="85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.558826 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0"} err="failed to get container status \"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0\": rpc error: code = NotFound desc = could not find container \"85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0\": container with ID starting with 85d7f0c4ef1361928ab4c00ed35fe0ade99ce369c5601551cc39c4f59d2a70d0 not found: ID does not exist"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.558858 4832 scope.go:117] "RemoveContainer" containerID="3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"
Jan 31 05:51:28 crc kubenswrapper[4832]: E0131 05:51:28.559427 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4\": container with ID starting with 3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4 not found: ID does not exist" containerID="3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.559464 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4"} err="failed to get container status \"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4\": rpc error: code = NotFound desc = could not find container \"3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4\": container with ID starting with 3a5d59d3654292408696c8de6c43b4d2d6ae2588219cca367b92293edd6bdfe4 not found: ID does not exist"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.559487 4832 scope.go:117] "RemoveContainer" containerID="310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807"
Jan 31 05:51:28 crc kubenswrapper[4832]: E0131 05:51:28.559863 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807\": container with ID starting with 310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807 not found: ID does not exist" containerID="310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807"
Jan 31 05:51:28 crc kubenswrapper[4832]: I0131 05:51:28.559902 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807"} err="failed to get container status \"310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807\": rpc error: code = NotFound desc = could not find container \"310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807\": container with ID starting with 310c7760901d90eb51cf03102f79e153cbacee7c20d589ccdcedd1fbb67fd807 not found: ID does not exist"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.085413 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d4lls_4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5/kube-rbac-proxy/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.186234 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-d4lls_4fdf3988-1bb7-4cd7-83fb-b656b3ea1ec5/controller/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.343160 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.498512 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.530355 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.531954 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.573739 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.700898 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.714958 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.715953 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.779057 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.874117 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb35761-f014-43c2-8e9a-9d63fb9784f2" path="/var/lib/kubelet/pods/0bb35761-f014-43c2-8e9a-9d63fb9784f2/volumes"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.924230 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-frr-files/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.931922 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/controller/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.932810 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-reloader/0.log"
Jan 31 05:51:29 crc kubenswrapper[4832]: I0131 05:51:29.961392 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/cp-metrics/0.log"
Jan 31 05:51:30 crc kubenswrapper[4832]: I0131 05:51:30.600022 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/kube-rbac-proxy/0.log"
Jan 31 05:51:30 crc kubenswrapper[4832]: I0131 05:51:30.643114 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/kube-rbac-proxy-frr/0.log"
Jan 31 05:51:30 crc kubenswrapper[4832]: I0131 05:51:30.648255 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/frr-metrics/0.log"
Jan 31 05:51:30 crc kubenswrapper[4832]: I0131 05:51:30.790307 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/reloader/0.log"
Jan 31 05:51:30 crc kubenswrapper[4832]: I0131 05:51:30.882503 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wd6p6_c10722d0-a029-4829-87c5-3f4340ea19ff/frr-k8s-webhook-server/0.log"
Jan 31 05:51:31 crc kubenswrapper[4832]: I0131 05:51:31.194884 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8567bf5564-5cjq7_873c6fd7-9f23-4376-96f6-3e8a19b56593/manager/0.log"
Jan 31 05:51:31 crc kubenswrapper[4832]: I0131 05:51:31.272704 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69fbdc97fc-wxfb2_01a0d1ce-012e-4200-ac92-995c0f1a2d1c/webhook-server/0.log"
Jan 31 05:51:31 crc kubenswrapper[4832]: I0131 05:51:31.435720 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cbkcm_b8624b4c-df9a-43b3-8f8c-99b9290a7956/kube-rbac-proxy/0.log"
Jan 31 05:51:31 crc kubenswrapper[4832]: I0131 05:51:31.980735 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cbkcm_b8624b4c-df9a-43b3-8f8c-99b9290a7956/speaker/0.log"
Jan 31 05:51:32 crc kubenswrapper[4832]: I0131 05:51:32.020996 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mkfvm_69815ead-05b6-4300-b463-b8781a92335c/frr/0.log"
Jan 31 05:51:44 crc kubenswrapper[4832]: I0131 05:51:44.601192 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log"
Jan 31 05:51:44 crc kubenswrapper[4832]: I0131 05:51:44.833524 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log"
Jan 31 05:51:44 crc kubenswrapper[4832]: I0131 05:51:44.850719 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log"
Jan 31 05:51:44 crc kubenswrapper[4832]: I0131 05:51:44.916284 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.018988 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/util/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.071792 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/extract/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.203145 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.220678 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcsqwrk_265071a1-233b-4945-b4d0-e6f20a6b4ab2/pull/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.404355 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.406067 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.407868 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.578999 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/util/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.613497 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/pull/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.618030 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cjh5p_8c73b6e9-4228-4e24-bdd7-18f6980c3bc7/extract/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.784753 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.880599 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.910260 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log"
Jan 31 05:51:45 crc kubenswrapper[4832]: I0131 05:51:45.918134 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.107073 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-utilities/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.131929 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/extract-content/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.324153 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.552763 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.602715 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.604421 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.812702 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-crxls_f9f544b6-248c-4f10-8c30-4a976fb6a35c/registry-server/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.836676 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-content/0.log"
Jan 31 05:51:46 crc kubenswrapper[4832]: I0131 05:51:46.888195 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/extract-utilities/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.099583 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tglv7_a23cd15a-ae33-49a9-bf22-0f0e4786b18f/marketplace-operator/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.253932 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.444939 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.511585 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.639780 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.692893 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-utilities/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.712492 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qgrt2_271e0384-a8f3-41b0-a543-28210590699c/registry-server/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.716589 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/extract-content/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.936176 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log"
Jan 31 05:51:47 crc kubenswrapper[4832]: I0131 05:51:47.952673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cqt2r_9a66457d-ec6e-439a-894d-a2ce2519bf0c/registry-server/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.102328 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.116175 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.179547 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.369828 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-utilities/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.414688 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/extract-content/0.log"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.539580 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.539629 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:51:48 crc kubenswrapper[4832]: I0131 05:51:48.811416 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zk6kb_68108ffd-eb09-4ae3-a4d4-0316d20d0feb/registry-server/0.log"
Jan 31 05:52:18 crc kubenswrapper[4832]: I0131 05:52:18.539698 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 05:52:18 crc kubenswrapper[4832]: I0131 05:52:18.540223 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 05:52:18 crc kubenswrapper[4832]: I0131 05:52:18.540269 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458"
Jan 31 05:52:18 crc kubenswrapper[4832]: I0131 05:52:18.540974 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 05:52:18 crc kubenswrapper[4832]: I0131 05:52:18.541047 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d" gracePeriod=600
Jan 31 05:52:18 crc kubenswrapper[4832]: E0131 05:52:18.759780 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c5f0a80_5a4f_4583_88d0_5e504d87d00a.slice/crio-conmon-23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 05:52:19 crc kubenswrapper[4832]: I0131 05:52:19.243703 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d" exitCode=0
Jan 31 05:52:19 crc kubenswrapper[4832]: I0131 05:52:19.243785 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d"}
Jan 31 05:52:19 crc kubenswrapper[4832]: I0131 05:52:19.243855 4832 scope.go:117] "RemoveContainer" containerID="7bc22d4c3119311c28b1a9f56e482a57a4821652eae1fe686281e7b3a301c522"
Jan 31 05:52:20 crc kubenswrapper[4832]: I0131 05:52:20.256131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerStarted","Data":"b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591"}
Jan 31 05:53:36 crc kubenswrapper[4832]: I0131 05:53:36.001131 4832 generic.go:334] "Generic (PLEG): container finished" podID="34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" containerID="f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183" exitCode=0
Jan 31 05:53:36 crc kubenswrapper[4832]: I0131 05:53:36.001213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" event={"ID":"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c","Type":"ContainerDied","Data":"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183"}
Jan 31 05:53:36 crc kubenswrapper[4832]: I0131 05:53:36.002342 4832 scope.go:117] "RemoveContainer" containerID="f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183"
Jan 31 05:53:36 crc kubenswrapper[4832]: I0131 05:53:36.178859 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8n6x_must-gather-wg4rj_34ea1ae5-6f7e-43c9-9b89-2458144c2d2c/gather/0.log"
Jan 31 05:53:47 crc kubenswrapper[4832]: I0131 05:53:47.519550 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b8n6x/must-gather-wg4rj"]
Jan 31 05:53:47 crc kubenswrapper[4832]: I0131 05:53:47.520258 4832 kuberuntime_container.go:808] "Killing container with a grace period"
pod="openshift-must-gather-b8n6x/must-gather-wg4rj" podUID="34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" containerName="copy" containerID="cri-o://b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c" gracePeriod=2 Jan 31 05:53:47 crc kubenswrapper[4832]: I0131 05:53:47.530808 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b8n6x/must-gather-wg4rj"] Jan 31 05:53:47 crc kubenswrapper[4832]: I0131 05:53:47.968105 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8n6x_must-gather-wg4rj_34ea1ae5-6f7e-43c9-9b89-2458144c2d2c/copy/0.log" Jan 31 05:53:47 crc kubenswrapper[4832]: I0131 05:53:47.968514 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.125596 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8n6x_must-gather-wg4rj_34ea1ae5-6f7e-43c9-9b89-2458144c2d2c/copy/0.log" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.126413 4832 generic.go:334] "Generic (PLEG): container finished" podID="34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" containerID="b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c" exitCode=143 Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.126464 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b8n6x/must-gather-wg4rj" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.126469 4832 scope.go:117] "RemoveContainer" containerID="b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.146461 4832 scope.go:117] "RemoveContainer" containerID="f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.152748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b86cz\" (UniqueName: \"kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz\") pod \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.152984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output\") pod \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\" (UID: \"34ea1ae5-6f7e-43c9-9b89-2458144c2d2c\") " Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.158353 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz" (OuterVolumeSpecName: "kube-api-access-b86cz") pod "34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" (UID: "34ea1ae5-6f7e-43c9-9b89-2458144c2d2c"). InnerVolumeSpecName "kube-api-access-b86cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.226934 4832 scope.go:117] "RemoveContainer" containerID="b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c" Jan 31 05:53:48 crc kubenswrapper[4832]: E0131 05:53:48.231038 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c\": container with ID starting with b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c not found: ID does not exist" containerID="b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.231073 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c"} err="failed to get container status \"b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c\": rpc error: code = NotFound desc = could not find container \"b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c\": container with ID starting with b8dcd42b99c6e8d7c1c4d342dd4cc3b5239af1ccaa23e61d4577721a1b28273c not found: ID does not exist" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.231093 4832 scope.go:117] "RemoveContainer" containerID="f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183" Jan 31 05:53:48 crc kubenswrapper[4832]: E0131 05:53:48.234683 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183\": container with ID starting with f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183 not found: ID does not exist" containerID="f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.234725 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183"} err="failed to get container status \"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183\": rpc error: code = NotFound desc = could not find container \"f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183\": container with ID starting with f82f97d0753823692d5e89c917bde7bf41a89faea6e2ba6dbce26e49be25e183 not found: ID does not exist" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.261917 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b86cz\" (UniqueName: \"kubernetes.io/projected/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-kube-api-access-b86cz\") on node \"crc\" DevicePath \"\"" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.330527 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" (UID: "34ea1ae5-6f7e-43c9-9b89-2458144c2d2c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:53:48 crc kubenswrapper[4832]: I0131 05:53:48.363230 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 05:53:49 crc kubenswrapper[4832]: I0131 05:53:49.875298 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ea1ae5-6f7e-43c9-9b89-2458144c2d2c" path="/var/lib/kubelet/pods/34ea1ae5-6f7e-43c9-9b89-2458144c2d2c/volumes" Jan 31 05:54:48 crc kubenswrapper[4832]: I0131 05:54:48.540390 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:54:48 crc kubenswrapper[4832]: I0131 05:54:48.540909 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:55:18 crc kubenswrapper[4832]: I0131 05:55:18.539720 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:55:18 crc kubenswrapper[4832]: I0131 05:55:18.540407 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:55:48 crc kubenswrapper[4832]: I0131 05:55:48.540006 4832 patch_prober.go:28] interesting pod/machine-config-daemon-bw458 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:55:48 crc kubenswrapper[4832]: I0131 05:55:48.540606 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:55:48 crc kubenswrapper[4832]: I0131 05:55:48.540652 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bw458" Jan 31 05:55:48 crc kubenswrapper[4832]: I0131 05:55:48.541460 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591"} pod="openshift-machine-config-operator/machine-config-daemon-bw458" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:55:48 crc kubenswrapper[4832]: I0131 05:55:48.541518 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerName="machine-config-daemon" containerID="cri-o://b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" gracePeriod=600 Jan 31 05:55:48 crc kubenswrapper[4832]: E0131 05:55:48.681252 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:55:49 crc kubenswrapper[4832]: I0131 05:55:49.577744 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" containerID="b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" exitCode=0 Jan 31 05:55:49 crc kubenswrapper[4832]: I0131 05:55:49.577821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bw458" event={"ID":"3c5f0a80-5a4f-4583-88d0-5e504d87d00a","Type":"ContainerDied","Data":"b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591"} Jan 31 05:55:49 crc kubenswrapper[4832]: I0131 05:55:49.577865 4832 scope.go:117] "RemoveContainer" containerID="23b71de00d0cbef4cecaa660543227a2bad0a75eb2e3690350e57d0d4fde275d" Jan 31 05:55:49 crc kubenswrapper[4832]: I0131 05:55:49.579032 4832 scope.go:117] "RemoveContainer" containerID="b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" Jan 31 05:55:49 crc kubenswrapper[4832]: E0131 05:55:49.579810 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:56:03 crc kubenswrapper[4832]: I0131 05:56:03.860631 4832 scope.go:117] "RemoveContainer" containerID="b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" Jan 31 05:56:03 
crc kubenswrapper[4832]: E0131 05:56:03.861392 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:56:15 crc kubenswrapper[4832]: I0131 05:56:15.859774 4832 scope.go:117] "RemoveContainer" containerID="b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" Jan 31 05:56:15 crc kubenswrapper[4832]: E0131 05:56:15.860999 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a" Jan 31 05:56:28 crc kubenswrapper[4832]: I0131 05:56:28.859653 4832 scope.go:117] "RemoveContainer" containerID="b83f7479cef855cf5da3ce244d129ffc457dcbe524c374c97f68b04e6cea4591" Jan 31 05:56:28 crc kubenswrapper[4832]: E0131 05:56:28.860992 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bw458_openshift-machine-config-operator(3c5f0a80-5a4f-4583-88d0-5e504d87d00a)\"" pod="openshift-machine-config-operator/machine-config-daemon-bw458" podUID="3c5f0a80-5a4f-4583-88d0-5e504d87d00a"